
Using Leapfrog Edge for dynamic grade control in underground mines

Join us for a 30-minute webinar on Using Edge for Underground Grade Control. The presentation will include an overview of the Edge block modelling extension, its place in the Seequent solution, and examples of workflows for short term planning and ore control.



Steve Law
Senior Project Geologist – Seequent


36 min


Video Transcript

(upbeat music)

<v Steve>Thank you for taking the time today</v>

to listen to this webinar.

My name is Steve Law and I have been

a Senior Project Geologist with Seequent

for the past two and a half years.

I am the Technical Lead for Leapfrog Edge

and the new product Pro 3D.

My background is primarily as a Senior Geologist

in Production and Resource Geology roles.

I will be presenting an example workflow

of how to use Seequent Mining Solutions software

to enhance the Grade Control process

in an underground operating environment.

Most aspects could also be utilized

in an open cut operation.

The key point of today’s demonstration

is to show the dynamic linkages

between Leapfrog Geo and Leapfrog Edge

managed within the framework of Seequent Central.

I will briefly touch on data preparation

and an example of workflow

splitting projects by discipline.

While it’s not designed as a training session,

I will cover the basic domain centric setup within Edge

and show the different components, including variography.

I will focus a little more on the Block Model setup

and how to validate the Grade Control Model

and report results as this part of the workflow

is a little different to how you may be used to doing it.

The Seequent Solution encompasses

a range of software products

applicable for use across the mining value chain.

Today I’ll focus on Seequent Central,

Leapfrog Geo and Leapfrog Edge,

which integrated together present the opportunity

to deliver key messages to different stakeholders.

Leapfrog Geo remains the main driver,

but I will show how by utilizing

the benefits of Edge and Central,

the workflow in an Underground Grade Control environment

can be enhanced.

Grade Control is the process

of maximizing value and reducing risk.

It requires the delivery of tonnes

at an optimum grade to the mill

via the accurate definition of ore and waste.

It essentially comprises data collection,

integration and interpretation, local resource estimation,

stope design, supervision of mining,

and stockpile management and reconciliation.

The demonstration today will take us up to the point

where the model would be passed

on to mine planning and scheduling.

The foundation of all Grade Control Programs

should be that of geological understanding.

Variations require close knowledge of the geology

to ensure optimum grade, minimal dilution

and maximum mining recovery.

By applying geological knowledge,

the mining process can be both efficient and cost-effective.

The integration and dynamic linkage of the geology

with the Grade Control Resource System

leads to time efficiencies.

We can reduce risk with better communication

via the Central Interface,

which offers a single source of truth

with full version control,

member permissions and trackability.

Better decisions are achievable

as teams work together to collaborate

on the same data and models.

Everyone can discuss and explore

different geological interpretations

and share detailed history of the models.

Central provides a framework

for quality assurance, as an auditable record

and timeline of all project revisions is maintained.

For those of you who have not been exposed

to Seequent Central, here is a brief outline.

It is a server based project management system

with three gateways for different users and management.

The Central portal is where administrators

can assign permissions for projects stored on the server.

Users will only see the projects

they have permission to view or edit.

The Central Browser allows users to view and compare

any version of the Leapfrog Project

within the stored timeline.

The models cannot be edited,

but output meshes can be shared with users

who do not require direct access to Leapfrog Geo.

There is also a data room functionality,

which equates to a Dropbox type function

assigned to each project.

Today, I will focus on the Leapfrog Geo Connector,

which is where we can define various workflow structures.

Specifically, we will focus on one

where the projects are split

according to geology modeling or estimation functions.

I will now move on to the software demonstrations.

I have a project set up here

with three separate branches:

a geology branch, an estimation branch,

and an engineering branch.

I will primarily be focusing

on the geology and estimation branches.

The way Central works is that you store

your individual projects on a server

and each iteration of the project

is saved in this timeline.

So the most recent project is always

the one at the top of the Branch.

So if we’re looking at the Geology Branch here, we can see Geology,

we go to the top level of Geology,

and then this is the latest geology project.

Again, for estimation,

we follow through until we find the topmost branch,

and this is the most recent resource estimate.

To work with Central,

we download the projects by right clicking

on the required branch and they are then local copies.

So in this instance,

I have four local copies stored on my computer.

I am no longer connected to the cloud server

for working on these, I can go offline, et cetera.

The little numbers, 1, 2, 3, and 4,

refer to the instances up here.

So I always know which project

I’m working on at any one time.

For this demonstration,

I’m going to start off with the Geology model

as set up as a pre-mining resource.

So down here at Project 1.

So the data set we’re working with here

is a series of surface drill holes,

diamond drill holes, primarily.

And it’s basically a vein system.

And I have set up a simple series of four veins

as part of a vein system.

The main vein is Vein 1,

and then we have Veins 2, 3, and 4

coming off as splays.

This model has been set up in a standard way.

So we have the geological model

and we’re going to use this

as the basis for doing an estimate

on Veins 1, 2, 3, and 4.

If you did not have Central,

then the Block Model would need to be created

within this project under the block models

and using the Estimation folder.

And then if you were going to update,

you would potentially freeze the Block Model

by using the new Freeze/Unfreeze function

and then to be updating the Geological model.

The Block Model won’t change until we unfreeze it.

The problem with this is that we don’t ever have

a record of what the previous model looked like

unless we zipped the project

and dated it and stored it somewhere.

The advantage of Central is that each iteration

is stored so that we can always go back

and see what the model looked like beforehand.

So the Geology Model has been set up.

I’m going to go now into the first estimation project

where I set up the pre-mining resource.

So in this case, this is local copy two,

and the branch is estimation.

(upbeat music)

The key difference here when using Central,

is that rather than directly linking

to a Geological Model within the project,

the domains are coming from centrally linked meshes.

So under the Meshes folder, you can see

we have these little blue mesh symbols.

And if I right click on these it’s reload latest on Branch

or from the project history.

The little clock is telling me that

since the last time I opened up this project,

something has changed in the underlying Geological Model.

Once we go reload latest on Branch,

then the little clock will disappear.

So all the meshes that I’m using within the estimation

are accessed directly as centrally linked meshes.

I’m just going to focus now on a quick run

through of how to set up a domain estimation

in Edge using Vein 1.

Edge works on a domain by domain basis.

So we work on one domain and one element,

and then we can copy and keep going through

until we create all four or more.

Each estimation is its own little folder.

So we have here, AU PPM Vein 1,

and we have within that a series of sub folders.

We can check quickly what the domain is

to make sure that we’re working in the space

that we expect to be.

And the values show us the composites,

but as the midpoints of the intervals.

So it’s been reduced to point data,

which we can still display as normal drill holes.

And we can do statistics on that

and have a look at a histogram, et cetera,

Log Probability plots.

There is also a normal score functionality.

So, particularly when working with gold data sets

that are quite skewed,

we may wish to transform the values to normal scores.

So to do that,

we would right click on the values

and there will be a transformed values function

if it hasn’t been run already as it has been here.

So that produces normal scores,

which purely changes the data to a normal distribution.

And it decreases the influence of the high values,

whilst the variogram is being calculated

and is then back transformed into real space

before the estimate is run.
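To make the idea concrete, here is a minimal sketch of a rank-based normal score transform (an illustration only, not Leapfrog's implementation): each value is replaced by the standard normal quantile of its rank, so even a heavily skewed gold data set maps to a symmetric distribution.

```python
from statistics import NormalDist

def normal_scores(values):
    """Rank-based normal score transform: map each value to the standard
    normal quantile of its rank, so the output follows N(0, 1)."""
    n = len(values)
    nd = NormalDist()  # standard normal distribution
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        # plotting position (rank + 0.5)/n keeps probabilities inside (0, 1)
        scores[i] = nd.inv_cdf((rank + 0.5) / n)
    return scores

# A skewed, gold-like data set: the extreme 25.0 no longer dominates
scores = normal_scores([0.1, 0.2, 0.5, 1.0, 25.0])
```

The back transform simply reverses this mapping, which is why the high values regain their influence once the variogram has been modelled.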

If we wish to edit our domain or our source data,

we can always go back to opening up at this level.

This is where we choose the domain.

And we can see here that it’s directly

linked back to the Central meshes here.

And then the numeric values

can come from any numeric table within our projects.

Either the raw drill hole data or composite data if we have it,

but we have the option of compositing at this point,

which is what I’ve chosen to do.

The compositing done here is exactly the same

as if the compositing is done

within the drill hole database first;

it just depends on your processes

as to whether it needs to be done there or here.
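For those unfamiliar with compositing, a minimal sketch (my own simplification, not Leapfrog's implementation) of length-weighted compositing of assay intervals into fixed-length downhole composites:

```python
def composite(intervals, comp_len):
    """Length-weighted compositing of (from, to, grade) assay intervals
    into fixed-length downhole composites."""
    end = max(t for _, t, _ in intervals)
    comps = []
    start = 0.0
    while start < end:
        stop = min(start + comp_len, end)
        weight = grade_sum = 0.0
        for f, t, g in intervals:
            # length of this assay interval falling inside the composite window
            overlap = max(0.0, min(t, stop) - max(f, start))
            if overlap > 0:
                weight += overlap
                grade_sum += overlap * g
        if weight > 0:
            comps.append((start, stop, grade_sum / weight))
        start = stop
    return comps
```

For example, two 1 m intervals at 2 g/t and 4 g/t composited to 2 m give a single 3 g/t composite.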

(upbeat music)

Variography is all done

within the Spatial Models folder.

In this instance,

I’ve done a transformed variogram on the transform values

because it gives a better result.

The way we treat variography is not too dissimilar

to other software packages.

We’re still modeling in three principal directions

relative to our plane of continuity.

The radial plot is within that plane.

So in this case,

it’s set along the direction of the vein,

and when we’re changing the red arrow,

it’s changing the pitch in the plane.

The key thing when using Leapfrog is that the variogram

is always linked through to the 3D scene.

So if we move anything within the forms,

then the ellipse itself will be changing

in the 3D scene and vice versa.

We can move the ellipse and it will change

the graphs in this side as well.

This is a normal scores variogram.

So once the modeling has been done,

we can manually change it using the little levers

or by just typing in the results through here

and hitting save. We do need to use the back transform,

so that will process it,

and then it is ready to use directly in the estimates.

I’ll discard those changes.
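For readers unfamiliar with what the variogram graphs plot, an experimental semivariogram pairs samples by separation distance and averages their squared differences. A minimal 1-D sketch (illustrative only; Leapfrog works in 3-D with directional tolerances):

```python
def semivariogram(xs, vs, lag, n_lags, tol=None):
    """Experimental semivariogram for 1-D sample positions xs with values vs.
    gamma(h) = average of (v_i - v_j)^2 / 2 over pairs separated by ~h."""
    tol = tol if tol is not None else lag / 2
    gamma = []
    for k in range(1, n_lags + 1):
        h = k * lag
        sq, npairs = 0.0, 0
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                if abs(abs(xs[i] - xs[j]) - h) <= tol:
                    sq += (vs[i] - vs[j]) ** 2
                    npairs += 1
        gamma.append((h, sq / (2 * npairs) if npairs else None))
    return gamma
```

A fitted model (spherical, exponential, etc.) is then drawn through these experimental points, which is what the levers above are adjusting.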

Okay, another good feature that we have within Edge

is the possibility of using variable orientation,

which is effectively dynamic anisotropy.

It takes into account local changes

in the underlying domain geometry.

We can use any open mesh to set this up.

In this case,

I’m using a vein hanging wall surface to develop this.

Again, these surfaces have been imported

directly from the Geology Model

as centrally linked meshes.

So as the geology model changes,

these will also be able to be updated if necessary.

There is a visualization of the variable orientation,

and that is in the form of these small disks,

which you can define on a grid

and you can display dip direction or dip.

And if we have a look in a section,

we can get an idea of how the ellipsoid will change.

So in this way,

the variogram will move through

and change its orientation relative to the local geometry,

rather than having to be set

at that sort of more averaged orientation

across the whole vein.

Sample Geometry is a form of declustering,

which I will not go into at this stage.

The key folder that we need to work with

is the Estimates folder.

And this is where we can set up either inverse distance,

Nearest Neighbour, ordinary or simple kriging,

or an RBF estimator, the same type of algorithm

that we use in our geological modeling

and numeric models.

In this instance, I’ve set up two passes for the kriging.

So I can open up both of those at the same time.

(upbeat music)

We can set top cutting at this level.

So in this case, I’ve set a top cut of 50.

We don’t have to do it there;

we could do it in the composite file earlier,

but it doesn’t need to be done outside of Leapfrog first.

The interpolant tells us whether it’s ordinary kriging

or simple kriging, and we define a discretization,

which is based on the parent block.

So this is dividing the parent block, in each dimension,

into four by four by four.
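The 4 x 4 x 4 discretization means each parent block is approximated by 64 evenly spaced points when the block average is estimated. A small sketch of how such points could be generated (an illustration, not Leapfrog's internals):

```python
from itertools import product

def discretization_points(origin, size, n=(4, 4, 4)):
    """Centroids of an n_x * n_y * n_z discretization of one parent block,
    used to approximate the block's average during kriging."""
    pts = []
    for i, j, k in product(range(n[0]), range(n[1]), range(n[2])):
        pts.append((origin[0] + (i + 0.5) * size[0] / n[0],
                    origin[1] + (j + 0.5) * size[1] / n[1],
                    origin[2] + (k + 0.5) * size[2] / n[2]))
    return pts
```

For a 10 m block this gives points every 2.5 m, centred within each sub-cell.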

We pick a Variogram Model;

here there was only one to choose from.

The estimates can only see the variograms

that are sitting within the particular folder

that you’re working from,

but you can have as many different variogram models

as you like to choose from

and test different parameters.

The Ellipsoid, when we first develop this,

is usually set to the variogram,

and then in this case,

we are overriding it by using the variable orientation.

So it maintains the ranges,

but changes the orientation relative to the mesh.

You can see here that the second search that I’ve set up

is simply double the ranges of the first.

We have a series of fairly standard

search criteria and restrictions.

So minimum and maximum number of samples;

outlier restriction, which is a form of top cutting,

but not quite as severe,

as you are only top cutting

high values beyond a certain distance.
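The difference between a plain top cut and an outlier restriction can be sketched in a few lines (illustrative only; the parameter names are mine, not Leapfrog's):

```python
def top_cut(samples, threshold):
    """Top cut: clamp every grade above the threshold."""
    return [(d, min(g, threshold)) for d, g in samples]

def outlier_restriction(samples, threshold, max_dist):
    """Outlier restriction: clamp a high grade only when the sample lies
    farther than max_dist from the block being estimated.
    Each sample is (distance_to_block, grade)."""
    return [(d, min(g, threshold) if d > max_dist else g) for d, g in samples]
```

A nearby 80 g/t sample keeps its full grade under the outlier restriction, while the same grade far from the block is clamped.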

Sector search, as either octant or quadrant,

and drillhole limit,

which is the maximum number of samples per drill hole.

In this case, the setting effectively

says that I must have at least one drill hole to proceed.

The second search is looking at a greater distance,

reducing down to a minimum of one sample and no restrictions.

And this is simply to try and get a value

inside all the blocks, which is needed

for reporting later on.
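To show how search restrictions interact with an estimate, here is a minimal inverse distance sketch with a search radius and min/max sample counts (my own simplification, not Edge's algorithm): a block returns no value when too few samples fall inside the search, which is exactly why a relaxed second pass is needed.

```python
import math

def idw_estimate(block_xyz, samples, power=2.0, radius=50.0,
                 min_samples=1, max_samples=10):
    """Inverse-distance estimate of one block from (x, y, z, grade) samples,
    with a search radius and min/max sample restrictions."""
    found = []
    for x, y, z, g in samples:
        d = math.dist(block_xyz, (x, y, z))
        if d <= radius:
            found.append((d, g))
    found.sort()                      # nearest samples first
    found = found[:max_samples]
    if len(found) < min_samples:
        return None                   # block left unestimated on this pass
    num = sum(g / d**power for d, g in found if d > 0)
    den = sum(1 / d**power for d, g in found if d > 0)
    if den == 0:                      # a sample sits exactly on the centroid
        return found[0][1]
    return num / den
```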

One point of difference for Edge compared to other software

is how we define variables.

So the grade variable name is this name down here,

which is automatically assigned but can be changed.

Then, rather than having

to set up variables within a Block Model definition,

we simply tick the boxes for the ones

that we may want to store.

So if we want to store kriging variance,

we’d simply tick the box here.

It doesn’t matter if we don’t tick it initially

and decide we need it later.

You can always come back to here

and tick the boxes as necessary.

I find that it is useful not to tick them

if you don’t need them,

because it makes the project quite busy later on.

So let us set up the parameters.

The Estimation folder can be thought of as a parameter file.

And this is where we set up

all the parameters for our estimate.

One key concept that we need to understand

is the concept of a combined estimator.

If I evaluate or view these runs,

Run 1 and Run 2, in the Block Model,

I’ll only be able to display either

Run 1 or Run 2 at a time.

Likewise, as I’ve got four veins set up,

I can only see each vein individually.

So I need to combine them together

to be able to view the whole gold grade

of everything at once.

And this is done by creating a new combined estimator,

which is simply like a merged table

from the geological database.

So in this instance, I’ve set up all of the veins,

and each pass for each.

The order of the domains isn’t critical

because they’re all independent of each other,

but the order of the passes is very important.

So Pass 1 must be on top with higher priority,

which means that it is looked at first;

then any blocks that do not have a value in them

after that is run will use the Pass 2 parameters.

If we inadvertently place Pass 2 above,

then it effectively overrides Pass 1,

re-estimating every block with Pass 2 parameters

as it does so.

So the order is quite important.
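The pass priority rule can be expressed in a few lines (a sketch of the concept, not the actual implementation): the lower-priority pass only fills blocks the higher-priority pass left unestimated.

```python
def combine_passes(primary, fallback):
    """Combine two estimation passes block by block. The higher-priority
    pass wins; the fallback only fills blocks it left unestimated (None)."""
    return [p if p is not None else f for p, f in zip(primary, fallback)]

pass1 = [1.5, None, 2.0]   # first, restrictive search pass
pass2 = [1.4, 0.9, 1.9]    # wider second pass covering every block
```

`combine_passes(pass1, pass2)` keeps the Pass 1 values and uses Pass 2 only for the middle block; reversing the arguments would, as described above, override Pass 1 everywhere.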

As you add new domains,

they will be available

over here on the left, and you can move them across at will.

Okay, this is often the variable that is exported

to other software packages in a Block Model.

So I tend to try and keep

the naming of these quite short and simple,

because a lot of packages are limited

to how many characters they can accept.

Again, outputs can be calculated

for the combined estimate

so we can store any of these.

And it does automatically assign an estimator number

and a domain number.

So if we have two passes,

it will colour the passes

in slightly different shades.

In this instance,

Pass 1 and Pass 2 are slightly different shades of aqua.

Since we have all our estimates

and combined estimator set up,

we then proceed to setting up a Block Model.

And this is where we view, validate, and report.

The validation Swath plots, and the resource reports

are all embedded within the Block Model.

And so when the Block Model updates,

these ancillary functions update as well.

In this case, we’re using a Sub-Block Model.

We can do regular models, which is simply a new block model.

It’s exactly the same, except that for a sub-block model

we have a sub-block count,

and the sub-block size is the parent block size divided by the count,

so in this case, we’ve got 10 metre parent blocks,

and two metre sub-blocks in all directions.
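The parent size / count / sub-block size relationship is just a division; in this sketch a count of 5 in each direction (an assumed value) reproduces the 10 m parent, 2 m sub-block example:

```python
def sub_block_size(parent_size, count):
    """Sub-block dimensions: parent block size divided by the
    sub-block count in each direction."""
    return tuple(p / c for p, c in zip(parent_size, count))
```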

Sub-block models can be rotated in both dip and azimuth.

The sub-blocking triggers tab is important.

So for us, we are bringing across the meshes of each vein.

So if we add a new vein,

we would need to bring that across to the right-hand side.

And then the most important tab is the Evaluations tab.

Anything that we want to see in the Block Model or run

needs to come across to the right-hand side.

So in this case, we are running the kriging

for Pass 1 in each of the veins,

and I’ve got the combined estimator.

Now, I do not need to bring across the individual components

if I have a combined estimator;

it’s just that if I want to look at the underlying composites

against the estimate in a Swath plot,

I can only do so with the underlying individual run files,

not the combined estimator.

So for validation, I do need the individuals.

And normally I would just look at Pass 1,

but if I’m just reporting,

I don’t need these and I can remove them,

take them across to the left,

and just have the combined estimators.

So whenever a new parameter is set up

in the Estimation folder,

then you must go to the corresponding block model

and move it across to the right

on the evaluation tab for it to run.

To visualize our results,

we have got all of the evaluations

that we’ve told it to use.

This is our combined estimator,

so I can see the results of all four veins,

and there are some other tools here against each parameter.

We’ve got this status.

So if we have a look at Vein 1 by itself,

that just shows the results for Vein 1,

and we can look at the status, and this shows us

which blocks were estimated and which were not.

So we still have a few blocks around the edges,

away from data, that aren’t estimated at this stage.

Anything that we stored in the outputs,

such as the number of samples, can be displayed.

So we can see here that we’ve got

plenty of samples in the middle,

but it gets quite sparse around the edges.

Another feature of block models is calculations and filters.

So what this one is doing is

we store all the possible variables

that we may use within the calculation.

And in this one, I’ve made a new numeric calculation,

which says if there is a grade in the block,

then use that grade,

but if there is not make it 0.01.

So this is just one way of trying

to remove blocks that have not been estimated.

The other way would be to

set up a Nearest Neighbour estimate

with a large search and append

that to the combined estimator at the very end.

So there are multiple ways of dealing with sparse blocks.
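The two approaches just mentioned can be sketched as follows (illustrative only; 0.01 is the background value chosen in the demo):

```python
def fill_with_background(grades, background=0.01):
    """Option 1: the numeric calculation - keep the estimated grade where a
    block has one, otherwise assign a small background value."""
    return [g if g is not None else background for g in grades]

def fill_with_fallback(grades, fallback):
    """Option 2: append a large-search fallback estimate (e.g. nearest
    neighbour) so unestimated blocks take the fallback value instead."""
    return [g if g is not None else f for g, f in zip(grades, fallback)]
```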

In this case our calculation

can be viewed in the Block Model.

Variables are parameters that can be used in a calculation,

such as a gold price,

but you can’t visualize them in the model.

And then filters are simply query filters,

defined as we would for drill holes,

but applying to the actual block model.

So I’ve created this gold final calculation,

and it can be displayed because it shows up down in here.

And we can see all the grades above two grams.

Okay, so that’s the basic setup

of the very first Resource Model.

So coming back to here, we would then be ready to continue:

we have some new Grade Control drilling.

So we go to the Geological Model

where that drilling was appended, local copy number three.

So we have a series of Drillhole, Grade Control,

infill holes that have been drilled as a fan

from underground.

Whoever’s in charge of working

on the interpretation

has gone through each of the new drill holes

and, using the selection tools that had already been set up,

reassigned intervals according to which veins

they are related to.

And then the model will update.

One thing to note is that while we can link meshes directly,

we can’t link drill hole information at the moment.

So what I would often do at this stage

is export all of the information

from the drill hole data set.

I could then load it to the data room

associated with this particular project.

(upbeat music)

So each project in the portal has a Files folder.

And within this folder, which works similar to a Dropbox,

you could put the current drill hole database files,

and then we could download those.

And that will be ready to import into our estimation folder.

This portal is also where we can assign

users to the project, and each project

can have its own specific users.

So we go back to

this one and have a look at the users.

We can see that we’ve got five users,

and they are all editors.

You can have people who are only going to be able to view

the project, and they would use Central Browser

to do that, but they cannot edit anything.

So, going back: we have updated the geology.

We go back to our estimation, open that up,

and reload the new drilling.

So we have two options.

If it’s brand new drilling that doesn’t exist

anywhere in the project,

then we could use the append function.

So we append the drill holes; or, if you’re accessing

a main database and everything’s added to that,

then you could reload,

and that will put in all the new holes

and any changes.

So the drill holes are updated.

And then within the meshes folder,

all the veins that might’ve changed need to be updated.

So we just simply go reload latest on Branch.

And then the Block Model will then rerun

and we are ready to report and validate.

Swath plots are maintained within the Block Model itself.

So to start with, we can create a new Swath plot.

Once it’s been created,

it is stored down on the graphs and tables,

and then we can open it up at any time

and have a look at them.

It automatically creates the X, Y and Z directions.

And we can add as many or as few items as we like

using the selected numeric items.

In this case,

I’m wanting to compare the kriging in Pass 1.

And I want to look at the original composites.

So to do that, I need to turn them on down here.

So you can see that I’ve got

the clipped composite values showing in red.

If I had an inverse distance estimator, I could add that

and then I could compare the kriging

against this inverse distance.
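Conceptually, a swath plot just bins each data set along one axis and compares the bin means; a minimal sketch of the binning behind one swath direction (illustrative, not Leapfrog's implementation):

```python
def swath_means(xs, values, bin_size):
    """Mean value per swath (slice) along one axis - the data behind a
    swath plot comparing composites with block estimates."""
    bins = {}
    for x, v in zip(xs, values):
        b = int(x // bin_size)          # index of the swath containing x
        bins.setdefault(b, []).append(v)
    return {b * bin_size: sum(vs) / len(vs) for b, vs in sorted(bins.items())}
```

Running this once on composite coordinates/grades and once on block centroids/estimates, then plotting both series, reproduces one direction of the swath plot.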

Okay, we’re ready to report against our Block Model.

The key to reporting within Leapfrog Edge

is that we need to have a Geological Model

with output volumes that can be evaluated.

So at this stage,

we cannot directly evaluate meshes.

So they need to be built into a Geological Model.

So in this instance, I have a GM domain,

which is simply taking the meshes directly,

as we can see using the new intrusion from surface,

and that builds them up.

It’s also a good way of being able to combine domains.

So if for instance, we wanted to estimate

Vein 2, 3, and 4 as a single domain,

then we could basically assign

whatever name is in the first lithology.

So I could call this kriging Zone 2,

and I can still leave this as Vein 2 here,

but I just use kriging Zone 2

as the first lithology for all three,

and then my output volumes,

I would just have kriging Zone 2 and Vein 1,

I could have a single vein for that one.

So it’s a great way of being able to combine

our domains together to run estimates.

We often use classification, and often we’ll have shapes

generated in other software,

or we could use polylines within Leapfrog to create a shape.

In this instance,

I’ve just done a quick distance buffer

around the drill holes, and I have referenced that.

So then I end up with two output volumes

for a measured and an indicated area.

And the rest would be inferred.
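A distance-buffer classification boils down to the distance from each block to the nearest drill hole; a sketch of the idea (the 10 m and 25 m thresholds are made-up values, not from the demo):

```python
import math

def classify_block(block_xyz, hole_pts, measured=10.0, indicated=25.0):
    """Classify a block by distance to the nearest drill hole point:
    Measured inside the first buffer, Indicated inside the second,
    otherwise Inferred."""
    d = min(math.dist(block_xyz, p) for p in hole_pts)
    if d <= measured:
        return "Measured"
    if d <= indicated:
        return "Indicated"
    return "Inferred"
```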

The same for any stoping panels.

If we wish to report within shapes or drive shapes,

they need to be built into a Geological Model first.

So in this instance, I’ve generated some panels.

So we’ve got

panel 1, 2, 3, 4, 5, okay.

I needed this to be combined with the mineralization shape,

so I’ve actually used the combined model.

So I have the stope panels confined to Vein 1,

which I will report against.

So resource reports, again,

are built within the Block Model itself.

So new resource report,

you can have as many as you wish stored.

And then once they’re there, you can open them up.

(upbeat music)

And in this case, I’ve got the domain Vein 1 Pass 1,

the domain Pass 2, and measured, indicated, and inferred,

where Pass 2 is only inferred,

and this is for each of the panels.

These can be moved around,

so if we wanted, the classification

could be first: measured, indicated,

and then the two components of the inferred.

We can apply a cutoff as we have done up here,

and SG can be applied either as a constant value;

or, if you’ve got a different SG per domain,

you can set up a calculation to do that;

or, if you’ve estimated SG,

it would be available within the dropdown here.

You can choose which columns you wish to look at.

So if you don’t want to see the inferred,

you could untick that one there.

Then the second report that we’ve generated

is one where we’re looking

at the results for all four veins.

So we’ve got Veins 1, 2, 3, and 4,

just per panel, without any classification.

So you can mix and match the resource reports

to whichever you want to look at.

So the process would just continue:

at the next stage, geology

could add some more mapping or a drilling update

to the Geology Model; we would then open up the estimation again,

make the changes, and publish this back.

So once the change is made, you publish that here.

These are the objects that can be viewed in Central Browser,

but regardless of what you tick here, everything is stored.

You can then assign it a stage, like Peer Review.

It’s going to store the entire project.

And then you choose which branch,

because I’m working on an earlier model,

I have to make a new branch,

but if I had been working from the last one,

it would automatically keep assigning it

to the Estimation Branch.

It’s always useful to put some comments in there

about what you have done.

It then usually only takes

a couple of minutes to upload to the server.

Thank you for your attention today.

(upbeat music)