Let us show you how Leapfrog Edge can simplify your resource estimates with dynamic links and 3D visualisation, putting geology at the heart of your estimate. We will also demonstrate a dynamic grade-thickness workflow.
Senior Geologist Business Development – Seequent
<v Peter>Good morning or good afternoon everyone</v>
from wherever you’re attending.
I’m a professional geologist
with a few decades of experience
in mineral exploration and mining.
And I’ve focused mainly on long-term
mineral resource estimates in a variety of commodities,
diamonds, base metals, copper, nickel, precious metals
and PGEs and through a variety of deposit styles
in North and South America and Asia.
The bulk of my experience
was spent at the Ekati Diamond Mine,
where I contributed to a team that did the resource updates
on 10-kimberlite pipes up there at the time.
And I was with Wood, formerly Amec Foster Wheeler,
for nine years in the Mining & Metals Consulting group.
I’ve been at Seequent now since October 2018,
so just over two years
and I am on the technical team
supporting business development training
and providing technical support.
And I think a few of you
will probably have communicated with me in that respect.
And I do focus on the Leapfrog Edge
resource estimation tool in Geo.
So we’re going to cover
basically the estimation workflow in Edge.
So we’ll start with exploratory data analysis
and compositing, we’ll define a few estimators.
We’ll create a rotated sub-blocked block model.
And this is kind of the trick for doing
the grade-thickness calculation in Edge.
We’ll evaluate the estimators, validate the results.
Not thoroughly, but we will do a couple of checks
and then we’ll compose and evaluate
our grade-thickness calculations
and review the results.
And then of course,
our work is intended for another audience
and I’m picturing that the rotated
grade-thickness model will go to
engineers who will re-block it
and use it for their mine planning.
So I’m just going to stop my camera for now
so that, there we go, turning it off and we’ll carry on.
Now, this grade-thickness in Edge is one way to do it.
We also have presented in the past,
most recently at the Lyceum 2020 Tips & Tricks
with Sarah Connolly presenting
and we’ll provide a link for you to this recording
that was done last fall.
So this is a way to do grade-thickness contouring in Geo,
if you don’t have Edge.
All right, now I’ll flip to the live demonstration.
And we’re starting off with a set of veins.
There are four veins here, quite a variety of drill holes,
not a lot of sampling, but that’s kind of common
in many narrow-vein situations
because it’s difficult to reach them.
Now let’s see what else we’ve got here.
So that’s all of our veins and all of the drill holes.
I’m going to load another scene,
which will show us what we’ve got for vein 1.
And it looks like I must have overwritten that scene.
So I’ll just turn off some of these other veins.
So here’s vein 1 with all of the drill holes.
So let’s filter some of these for vein 1.
So those are just the assays that we have in vein 1
and it’s pretty typical that we’ve got clustered data,
so they’ve really drilled this area off quite well
with some scattered holes out around the edge,
maybe chasing the extents of that structure.
So in other words, we’ve got some clustered data here.
So that’s a quick review of the data in the scene.
Now I’m going to flip to just looking at the sample lengths,
the assay interval lengths because those will help us
in making a decision on what composite length to use.
So this is the first of our EDA in the drill hole data.
And I’m going to go to the original assay table.
I do have a merged table that has the evaluated
geological model combined with the assays,
so we can do some filters on that.
But let’s just check the statistics on our table.
So at the table level, we have multivariate statistics
and we have access to the interval lengths statistics.
So looking at a cumulative histogram of sample lengths,
we can see that we have about 80% of the samples
were taken at one meter or less
and then about 20% are longer
and there’s quite a kick at two.
So it looks like, well, if we look at the histogram,
we’ll probably see the mode there,
a big mode at one,
but then there’s a little bump down here at two meters.
So given that we don’t want to subdivide
our longer sample intervals,
which would impact the coefficient of variation, the CV,
it might make our data look a little bit better than it is,
so we’ll composite to two meters.
And that way we’ll get a better distribution of composites.
And we are expecting to see some changes,
but let’s look at what we’ve done for compositing.
I have a two-meter composite table here.
Just have a look at what we’ve done to composite.
So I’ve composited inside the evaluated GM,
that’s the back-flagged model, if you want to think of it,
compositing to two meters.
And if we have any residual end lengths, one meter and less,
they get tagged, or backstitched I should say,
to the previous assay intervals.
So that’s how we manage that
and composited all of the assay values.
So we have composites.
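The compositing rule just described, fixed two-meter runs with residual end lengths of one meter or less folded back into the previous composite, can be sketched in Python. This is an illustrative stand-in for the idea, not Leapfrog’s actual implementation:

```python
def composite_boundaries(start, end, run=2.0, residual_max=1.0):
    """Composite (from, to) pairs along one domain intercept.

    Fixed-length runs are laid down from the top of the intercept;
    a leftover piece of residual_max or less is merged ("backstitched")
    into the previous composite, otherwise it stays as its own short
    composite. Grades would then be length-weighted over these pairs.
    """
    eps = 1e-9
    edges = []
    z = start
    while z + run <= end + eps:
        edges.append((z, z + run))
        z += run
    residual = end - z
    if residual > eps:
        if residual <= residual_max and edges:
            prev_from, _ = edges.pop()
            edges.append((prev_from, end))  # backstitch into previous run
        else:
            edges.append((z, end))
    return edges
```

For a 7 m intercept, the 1 m leftover is absorbed into the last composite, so no short residual sample survives to distort the distribution.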
So let’s have a look at what the composites look like
in the scene then, I think I’ve got a scene saved for that.
No, I must have overwritten that one too.
I was playing on here this morning
and made a few new scenes,
I must have wrecked my earlier ones.
So let’s just get rid of the domain assays.
Click on the right spot and then load the composite.
So we have gold composites,
pretty much the same distribution.
And if I apply the vein 1 filter,
we’re not going to see too much different either.
Turn on the rendered tubes and make them fairly fat
so we can see what we’ve got here.
So there’s the composites.
Now we do have, again, that clustered data in here
and we have some areas where we had missing intervals
and now composite intervals
have been created in there.
So I think the numbers of our composites have gone up,
but that’s a good thing really,
because we’ve got some low values in here
now that we can use to constrain the estimate.
Well, let’s just check the stats on our composites.
So we have 354 samples,
with an average of 1.9 approximately grams
and a CV of just over two.
So it does have some variance in this distribution
and it looks like we’ve got some outliers over here.
I did an analysis previously,
earlier on, and I pegged 23 grams as the outlier threshold.
If I want to see where these are in the scene
and you may know already that you can interact
from charts in the scene, it applies a dynamic filter.
So I’ve selected the tail in that histogram distribution,
and it’s filtered now for those composites in the scene
and I can see the distribution of them.
Now, if they were closely clustered,
that would mean we might consider
using an outlier search restriction on these,
but I chose to cap them.
Generally speaking, if you have a cluster of outliers,
it may reflect the fact that there’s a subpopulation
that hasn’t been modeled out, so you can treat it
with a special outlier search restriction.
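The capping and the coefficient of variation mentioned here are simple calculations; a minimal sketch, where the 23 gram threshold is the value Peter pegged in his earlier analysis:

```python
import statistics

def cv(values):
    """Coefficient of variation: population std dev over the mean."""
    return statistics.pstdev(values) / statistics.fmean(values)

def cap(values, threshold=23.0):
    """Top-cut: clamp grades above the cap to the cap value."""
    return [min(v, threshold) for v in values]
```

A CV around two, as in this dataset, signals a skewed distribution where capping (or an outlier restriction) is worth considering.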
I’ll just get my vein 1 filter back here.
All right, so it was predetermined
that we would use vein 1 as a domain estimation.
So the next step in the workflow
is to add a domain estimation to our Estimations folder.
If you’re a Geo user, you won’t see estimations.
It’s only when you have the Edge extension
that you see this folder
where you can define the estimations
that you want to evaluate onto your block model.
So let’s just clear the scene on this one.
And I think I do have a scene ready to go for this.
So let’s go to saved scenes.
Here’s my domain estimation using saved scenes,
just saves a few clicks.
Well, not a lot has changed from what we were looking at
in terms of composites, but you’ll see now
that where we had intervals in the drill holes before,
now we have discrete points.
So those reflect the centroids
or the centers of the composites.
And let’s have a look at, that’s the 3D view.
And let’s just check that the stats still look okay.
So go back up to the Estimation folder to the vein 1.
I guess I can show you what the boundary looks like too.
So I’m just double-clicking on the domain estimation.
It’s calculating a boundary plot
showing us the average grade inside the vein versus outside.
And what we’re paying attention to here
is what’s going on across the boundary.
And in this case, there’s quite a sharp drop in grade.
There’s a high grade contrast.
So I will use a hard boundary.
Now if this was more gradational,
if the boundary was a little bit fuzzy, for instance,
I could use a soft boundary and share samples
from the outside to a specified distance.
And this is calculated as perpendicular
to the triangle faces, so it’s nearly a true distance.
But I am going to use a hard boundary, just cancel that.
And we’ll start looking at some of the other things here.
So we’ve got the domain and the values loaded.
I also did a normal scores transform
in an effort to calculate a nicer variogram,
but in the end, I didn’t go there.
But we do have the ability to do
a Gaussian anamorphosis
or discrete Gaussian transformation,
whatever you want to call it.
Anyway, so it’s a normal scores transform,
which can sometimes help to improve modeling,
calculating and modeling variograms
in the presence of noisy data.
I opted instead to go with the correlogram here.
So we’re looking now at our spatial continuity in our model.
I opted to go with a correlogram
because a correlogram will work better
in the presence of clustered data.
It is very good at controlling outliers and noisy data,
and it also helps to see past the clustered data.
So the correlogram is the covariance function,
which has been normalized by the variance of the sample pairs,
if you want to think of it that way.
So the covariance function is divided by
the product of the standard deviations
of the data to normalize it.
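That normalization can be written out; a small sketch of a correlogram value for a single lag, computed from head/tail sample pairs (illustrative, not Edge’s internal code):

```python
def correlogram(pairs):
    """Correlogram value for one lag distance.

    `pairs` is a list of (head, tail) sample values separated by that
    lag. The covariance of the pairs is normalized by the product of
    the head and tail standard deviations, giving a value in [-1, 1].
    """
    n = len(pairs)
    heads = [h for h, _ in pairs]
    tails = [t for _, t in pairs]
    mh = sum(heads) / n
    mt = sum(tails) / n
    cov = sum((h - mh) * (t - mt) for h, t in pairs) / n
    sh = (sum((h - mh) ** 2 for h in heads) / n) ** 0.5
    st = (sum((t - mt) ** 2 for t in tails) / n) ** 0.5
    return cov / (sh * st)
```

Because each lag is standardized by its own pair statistics, the correlogram is less sensitive to clustered and noisy data than a raw variogram.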
So we can see in our map
that there is quite a strong vertical trend,
the pitch actually is the vertical
and the correlograms are displayed in their true form,
which is inverted relative to how we usually display variograms.
Anyway, so that’s why these curves are upside down.
And I have managed to apply reasonable models, I think,
to the experimental variograms, so if we look in the scene,
we’ll see our variogram ellipse.
It’s not very big, so the continuity isn’t great.
But it does look reasonable together with the data.
So that is the go-ahead variogram.
And the next thing we want to look at is declustering.
So clustered data will give you
typically an overstated naive mean of your samples.
So we use different declustering methods.
Leapfrog provides a moving window declustering.
You can also use a nearest neighbor model,
which gives you in effect like a 3D
polygonal volume type of a weighting.
So the samples are weighted by the inverse
of the area that they have around them.
And so these outlying samples get more weight
than the closely spaced,
typically higher grade clustered data
where we’ve drilled off the higher grade
portion of the vein.
And so the declustered mean
is typically lower than your naive mean.
So let’s have a look if that’s the case
with our declustering tool in Edge.
So here’s our distribution.
And you can see as the moving window relative size
increases, that the average grade comes down
until it hits kind of a trough
and then it goes back up again.
And I suppose if you used a search ellipse
that was the same size as your vein,
it would come right back up to the input.
So the input mean, and this mean is a little bit different
than the 1.91 that we saw earlier.
So this is the mean of the points,
of the data rather than the length weighted intervals.
So it’s just slightly different,
but very close, 1.89, roughly.
And if I click on this node,
we’ll see that the declustered mean is 1.66.
So file that number away for later.
That’s our target.
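The weighting idea can be illustrated with cell declustering, a close cousin of the moving-window method described here; the function below is a hypothetical sketch, not Leapfrog’s declustering tool:

```python
from collections import defaultdict

def cell_declustered_mean(points, cell):
    """Cell-declustered mean of (x, y, z, grade) samples.

    Each sample is weighted by 1/n, where n is the number of samples
    sharing its cell, so densely drilled (often higher-grade) clusters
    are down-weighted relative to scattered step-out holes.
    """
    cells = defaultdict(list)
    for x, y, z, g in points:
        cells[(int(x // cell), int(y // cell), int(z // cell))].append(g)
    total_w = total_wg = 0.0
    for grades in cells.values():
        w = 1.0 / len(grades)  # split one cell's weight across its samples
        for g in grades:
            total_w += w
            total_wg += w * g
    return total_wg / total_w
```

With a high-grade cluster and one distant low-grade hole, the declustered mean drops below the naive mean, just as the 1.66 versus 1.89 comparison shows in the demo.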
Moving on to estimators,
and I defined three different estimators.
There’s an inverse distance cubed,
a nearest neighbor and a kriging estimator.
And with the CV up around two,
I was expecting that I would need
a slightly more selective type of an estimator
and that’s why ID cubed.
And we’ll see what the results are comparing the kriging
to the inverse distance cubed.
Now I did a multi-pass strategy as well,
and that again, is to address the clustered data.
So the first pass search,
I’ll open up the first pass search here
using the variogram and the ellipsoid
is to the variogram limit.
So it looked reasonable as a starting point
for a first pass.
And I have set a minimum number of samples of 14
and a maximum of 20
and a maximum samples per drill hole of five.
That means that I need three holes on my first pass
to estimate a block.
And the block search isn’t very wide,
so I’m trying to minimize the negative kriging weights.
So as the passes go, the second pass uses a minimum
of two holes to estimate the block.
And then finally pass three, I’ll just show you,
is pretty wide open, big search
and no restrictions on the samples.
So even one sample will result in a block grade estimate.
So the idea here that this is just a fill pass,
making sure that as many blocks as possible are estimated,
and I used a similar strategy, sorry,
the same search and sample selection, for the ID cubed.
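The sample-selection rules for the first pass (minimum 14 samples, maximum 5 per drill hole, hence at least three holes) can be expressed as a small check; a hypothetical sketch of the logic, not Edge’s search code:

```python
from collections import Counter

def pass_accepts(samples, min_samples, max_per_hole):
    """Check the search-pass sample rules for one block.

    `samples` is a list of (hole_id, grade) candidates found inside the
    search ellipsoid. Samples per drill hole are capped first, then the
    minimum total is required. With min_samples=14 and max_per_hole=5,
    at least three drill holes are needed to estimate a block.
    """
    per_hole = Counter(hole for hole, _ in samples)
    kept = sum(min(n, max_per_hole) for n in per_hole.values())
    return kept >= min_samples
```

A single well-sampled hole can never satisfy the first pass on its own, which is exactly the point of the per-hole cap.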
So what does this look like when we evaluate it in a model?
Well, I guess first off, we’ll build a model.
Now I have one built already,
but as I mentioned, kind of the trick
to doing the grade-thickness in Edge
is to define a rotated sub-block model.
So let’s see what that looks like.
A new sub-block model,
big parent blocks.
The parent blocks are scaled to the project limits
and it thinks that I need to have huge blocks
because the topography is very extensive,
but it’s not the case.
Now I’m going to use a 10 by 10 in the X and the Y
and the 300 in Z and when I’m done,
I will have the, well, the next step actually
is to rotate the model such that Z is across the vein
and that way, with a variable Z
going from zero to whatever it needs to be,
it will make like an array of blocks
that kind of look like little prisms.
There’s a shortcut to set the angles of a rotated model,
and that is to use a trend plane.
So let’s just go back to the scene,
align the camera up to look down dip of that vein
and I’ll just apply a trend plane here.
That’s got to be about 305.
I’m just going to edit this,
305 and the dip, 66 is okay.
I don’t need to worry about the pitch for this step.
It doesn’t come to bear
when I’m defining the block model geometry.
So now, I will set my angles from the moving plane.
And now that model, it’s a little bit jumped off
to the side there.
It is in the correct orientation now
at least for that vein, where am I?
Oh, my trend plane isn’t very good.
Let’s back up here.
I’m going to define a trend plane first
and then it’ll create a plane.
To the side, 305
and dipping, 66 I think was good enough.
Now a trend plane and let’s get back
to this block model business.
New sub-block model,
10 by 10 by 300.
I want to make sure that I go right across the vein
wherever there are any undulations.
And I will just have five-meter sub-blocks.
So the sub-block count two into 10
gives me my two five-meter sub-blocks
and let’s set angles from moving plane.
Now it’s lined up to where that vein is.
There’s a lot of extra real estate here that we can get.
We can trim that by moving the extents
a little bit back and forth.
Of course, the minimum thickness of that model
in the Z direction is going to be 300
because that’s what I have set my Z dimension to be.
One more tweak and that’s roughed in pretty, pretty good.
Of course, if you had an open pit,
you would have to make it a little bit bigger,
but this is going to be an underground mine.
So that’s aligning it to the model
and you can see by the checkerboard pattern
that Z is across the vein.
And then after that, I would set my sub-blocking triggers
and the evaluations and carry on.
Now we already have a model built.
So I’ll just click Cancel at this point
and bring that model into the scene.
Well, the first thing we could look at
is the evaluated geological model.
Oops, the evaluated geological model
and that is filtered for measured and indicated blocks,
but there’s the model.
And if we cut a slice right along the model trend
slice and cross the vein
and then set the width to five and the step size to five.
And this can be 125.
Now I am looking perpendicular to the model.
And if I hit L, on the keyboard to look at that model,
I should be able to see those prisms
that I was looking for.
Yeah, they’re all kind of prisms.
We can see this model isn’t very thick or tall,
it’s only about five meters or less.
And I don’t see any breaks in the block.
So that means that my Z value at 300 is pretty good.
If I would have used a Z at, let’s say a 100 meters,
I may have had some blocks being split,
but I want only blocks that are completely across
that vein in the Z direction.
So let’s turn off these lights, we’ve got our model
and of course we’ve evaluated the GM
and I set the sub-blocking triggers to the GM as well.
Now I’m just going to my cheat sheet here
to see what I also want to show you.
So at this point, yeah,
let’s have a quick look at some of the models.
So that’s the geology.
I created, sorry, I created a combined estimator
for the three passes for the inverse distance cubed
and the kriging estimator
so that I can combine all three passes.
Actually, I better show you what that looks like.
So in the combined estimator,
I just double clicked on it to open.
I selected passes one, two, and three in order.
It’s important because as a block is estimated,
it doesn’t get overwritten by the following passes.
So if I were to put pass three at the top,
of course, everything would have been estimated
with pass three and pass one and two
wouldn’t have had any impact on the model at all.
So yes, hierarchy is important and it is correct.
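That first-pass-wins hierarchy can be sketched as follows; a hypothetical illustration of the combining logic, not Edge’s implementation:

```python
def combine_passes(passes):
    """Combine per-pass block estimates in priority order.

    `passes` is a list of dicts {block_id: grade}, one per search pass,
    ordered pass one first. The first pass that produced a value for a
    block wins; later (wider, less constrained) passes never overwrite
    an earlier estimate.
    """
    combined = {}
    for estimates in passes:
        for block, grade in estimates.items():
            combined.setdefault(block, grade)  # keep the earlier pass
    return combined
```

Reversing the list order would let the wide-open third pass claim every block, which is the mistake the ordering guards against.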
So let’s just have a look at that model.
So there’s the kriging, here’s the kriged model
and it’s not bad, but I can see,
I would have to tune it a little bit better.
I think there’s some funny artifact patches of blocks
and things, may or may not be able to get rid of those.
And there are big areas around the edge
that have just one grade.
So that’s kind of reflecting
the fact that there’s not a lot of data out there
and that third pass search is basically estimating the block
with that one sample.
Let’s see what the ID cubed model looks like.
That’s a much prettier model, I guess,
but the thing is, how does it validate?
And we will check it against the nearest neighbor model,
which is kind of a proxy to a declustered distribution.
Even though we will have more than one sample per block,
it does kind of emulate, or is a proxy for,
a properly declustered distribution.
Anyway, let’s go back to the ID cubed.
Another thing that we can do,
and that is to restrict our comparison
within a reasonable envelope
around the blocks that are well-supported.
So this is basically, where does it matter?
Like it doesn’t matter so much around the edges,
we’re expecting a little bit of error out there.
But if we define a boundary
around that area of the well-drilled region,
then it’s showing in there.
There’s my well-drilled region.
So I’m calling this my EDA envelope.
I’m going to do my validation checks in that envelope.
They’re going to be much more relevant
than just having everything on the outside
that is inferred confidence or less, let’s say.
Okay, back to the model and let’s check our stats.
So going to check statistics, table of statistics.
And I want to replicate the mean of the distribution
with my estimates.
And you’ll recall that the declustered mean is 1.66,
the nearest neighbor model is also very close to that
within a percent, 1.689, and the CV is almost the same
as it was for our samples, which was two.
So that nearest neighbor model,
again, reasonable proxy
for the declustered grade distribution
and very useful for comparison.
The ID cubed model comes in quite well.
Well, it’s a little bit off, but not bad, 1.73, roughly.
So we’re replicating the mean of our input distribution
with the ID cubed.
For some reason, we’ve got higher grades
in the ordinary kriged model than we do in our samples.
So that’s kind of a warning sign
and it is very much smoothed,
with a CV of 0.66, which is much less than two,
so it’s probably overly smoothed.
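These two global checks, bias against the declustered mean and smoothing via the CV, can be bundled into a helper; the 5% bias tolerance and the 0.5 CV ratio below are assumed illustrative thresholds, not Seequent guidance:

```python
def validate(model_mean, model_cv, target_mean, sample_cv, bias_tol=0.05):
    """Two quick global validation checks on a block model.

    - relative bias of the model mean against the declustered target mean
    - a smoothing flag when the block CV falls well below the sample CV
    Both thresholds (bias_tol, the 0.5 CV ratio) are illustrative
    assumptions, not recommended acceptance criteria.
    """
    bias = (model_mean - target_mean) / target_mean
    return {
        "bias": bias,
        "bias_ok": abs(bias) <= bias_tol,
        "oversmoothed": model_cv < 0.5 * sample_cv,
    }
```

Run against the talk’s numbers, an ID cubed mean of 1.73 versus the 1.66 target passes the bias check, while a block CV of 0.66 against a sample CV of two trips the smoothing flag.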
And without doing any additional validation checks,
I’m going to use my ID cubed as the go-ahead model
and yeah, carry on from there.
Now let’s see.
Oh, I didn’t mention it,
but yeah, I did do variable orientation.
Funny how you can miss stuff when you’re doing these demos.
I did do a variable orientation,
which is using the vein to capture the dynamic
or locally varying anisotropy in the estimation.
Our implementation of variable orientation in Edge
changes the direction of the search
and the direction of the variogram,
it doesn’t recalculate the variogram.
It just changes the directions
and applies that to the search
so that we get a much better local estimate
using the variable orientation.
Okay, so moving right along,
and the next thing is to get into the calculations
because that’s where we do the grade-thickness.
So I’m going to double-click on Calculations,
it’ll open up my calculations editor.
I better show you the
panel with the tools
Where are my tools?
I’ll maybe open it in another way here.
‘Cause I have to show you that panel.
Calculations and filters
so there should be a panel that pops out here
where we can see the metadata that is used,
or the items we can select to go into the calculations,
and our syntax buttons.
And isn’t that funny?
It’s not being active for me.
Well, let’s have a look at the calculations anyway,
because the syntax is sort of self-explanatory.
So I did do a filter for the,
let’s expand, collapse that.
So I did do a filter for my measured and indicated,
which is within the EDA envelope.
So that was my limits.
I also did some error traps that found empty blocks,
ones that didn’t get estimated,
and put in a background value.
So if it’s the vein and the estimate is normal,
then it gets that value,
otherwise it gets a low background value.
And I did that for each of my models.
I also calculated a class, category calculation for class.
So if it was in the vein and it was in the EDA envelope,
and within 25 meters, then it gets measured,
otherwise in the EDA envelope, it’s indicated.
So I did contour
the region of 25 to 45 meters
and then drew a polyline.
And that was what formed my EDA envelope.
And then outside of that, if it’s inferred
or if it’s still in the vein
but beyond the indicated boundary,
or my EDA envelope, then I just called it inferred.
So for the statistics,
I did also create a
numeric calculation of the measured and indicated blocks;
those were just for more comparisons.
But finally, finally, we’re getting to the thickness.
So the thickness is pretty straightforward
because we have access to the Z dimension.
So all I had to do for thickness
is say, if it was in the vein,
then give the thickness model the value of the Z dimension,
otherwise it’s outside.
And then the next step after that
is to do a calculation, very simple.
If that block was normally estimated, has a value
in other words, then we just multiply our thickness
times the grade of that final model.
And I used the ID cubed model, and that’s that.
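The thickness and grade-thickness calculations just described reduce to a few lines; a hypothetical sketch of the logic, with the block fields invented for illustration:

```python
def grade_thickness(blocks):
    """Grade x thickness per block, following the calculation described:
    thickness takes the block's Z dimension when the block is in the
    vein, and gt is thickness * grade only for normally estimated blocks.

    Each block is a dict with keys 'in_vein' (bool), 'z_dim' (meters)
    and 'grade' (g/t, or None when the block was not estimated).
    """
    out = []
    for b in blocks:
        thickness = b["z_dim"] if b["in_vein"] else 0.0
        if b["in_vein"] and b["grade"] is not None:
            gt = thickness * b["grade"]
        else:
            gt = None  # outside the vein, or not estimated
        out.append({"thickness": thickness, "gt": gt})
    return out
```

This only works because the rotated model’s Z dimension spans the full vein thickness, which is why the 300 m Z extent and the prism-shaped blocks matter.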
So we can then look at these models in the scene.
Any calculation that we do can be visualized in the scene.
So let’s have a look there, see the thickness,
so you can see where there might be some shoots
in that kind of orientation.
And if we multiply, sorry, grade times thickness,
we can see, yeah, maybe there are some shoots
that we need to pay attention to,
maybe target some holes down plunge of these shoots
to see exactly what’s going on.
And as I mentioned, that model exists,
well exists, we can now export that model
to give it to the engineers.
So let’s just go to our model.
What does that look like?
Let’s call it Sub-block Model Rotated,
and we can export just the CSV file
that has all of the information in the top of the file,
a CSV with a separate text file for that metadata,
or just points if you just need the points
for maybe contouring or something,
but I’m going to select that CSV output format.
Well, I’ve already done this, so it is somewhat persistent.
It remembered which models I had exported
and then applying a query filter
so I’m not exporting the entire model,
just the one for vein 1
and ignoring rows or columns
where all of the blocks were in an air condition or empty.
And then I can use status codes, either as text
or as numerics, carry on here
and pick the character set.
The default usually works here in North America,
and there’s a summary and finally export.
There aren’t a lot of blocks there,
so the export actually happens pretty quickly.
So let me see what else I’ve got here.
Yes, okay, so the filter,
I just want to show you in the block model,
I did define that filter for the vein
1 measured and indicated.
So that is kind of the view again,
where the blocks are filtered for what matters.
And that’s it, that’s the workflow.
And I hope I’ve covered it in 30 minutes or less,
and the floor is now open for questions.
<v Hannah>Awesome, thanks, Peter.</v>
That was really good.
I even learned a couple of things.
I love when you sprinkle breadcrumbs of knowledge
throughout your demos.
Right, so we’ve got some time for questions here.
I’ll give everybody a moment to type some things
into that questions panel, if you haven’t done so already.
I’ll start, Peter, there’s a couple of questions here.
So the first one,
how can you view sample distance on a block model?
<v Peter>Okay, maybe I’ll go back to Leapfrog</v>
for that then.
So sample or average distance and minimum distance
are captured in the estimator.
So let’s just go up to an estimator.
Estimator, I’ll use the combined one.
And in the outputs tab, if I want to see the minimum
or average distance, I can select those
as the output number of samples as well.
So with that one selected,
I should be able to go to my evaluated model.
There’s my combined ID3 estimator, there’s average distance.
So there’s a map of distance, to samples
and each block, if I click on a block,
I can actually go right to the absolute value.
And you can export this too if you need that kind of thing
in the exported block model.
<v Hannah>Okay, thanks, Peter.</v>
how can I find that grade-thickness workflow
on drill holes?
I can actually just paste that into the chat here.
I’ll paste that hyperlink.
So that was our, we had a Tips & Tricks session
as part of our Lyceum. I’ll put that in the chat here.
Okay, another question,
we saw you pick your declustering distance in the plot.
Is this typically how all
<v Peter>I missed the question Hannah.</v>
<v Hannah>We saw you pick your declustering distances</v>
or distance in the plot, is that typically how
all declustering distances are selected
or can you speak more about declustering distances?
<v Peter>Well, generally speaking,</v>
when we’re doing declustering,
we’re targeting a distribution
where the area of interest has been drilled off more
than outside and consequently,
the naive average is higher than the declustered average.
So that’s why I’m picking the lowest point here,
the lowest mean from the moving window relative size.
And so it’s,
now that isn’t always the case.
It could be flipped if you’re dealing with contaminants.
In that case, you might find that your curve is upside down
or inverted with respect to this one,
and you would pick the highest one.
So I have seen that a couple of times.
I don’t have a dataset that I can emulate that,
but this is typically where you’re picking
your declustering mean.
Did that answer the question?
<v Hannah>I think so, yeah.</v>
<v Hannah>Another question just came in.</v>
I know we’re past our time here,
but I do want to squeeze these in
for anyone who’s interested so.
Thanks, great presentation.
Is there a limitation to the model size
for import or export using Edge?
I guess we mean block model there.
<v Peter>Yeah, there doesn’t appear to be a hard limit</v>
on block model size.
However, I should qualify that the current structure
for the Edge block model does not support
the importing of sub-block models.
So while you can export a sub-block model,
you can’t import one, which is a bit of a limitation
until we fully implement the octree structure,
which is similar to what some of our competitors
that use sub-blocked models do as well.
But I know there are people out there
that have billion-block block models
that they’re actively working with within their organizations,
mind you, they’re very cumbersome at that scale.
<v Hannah>Right, okay.</v>
Well, that wraps up our questions.
We’ve got another one who says thank you.
So thanks again, Peter.
That was awesome.