This month we will be looking at tips to consider for working with Leapfrog Edge.
With Leapfrog Edge you can dynamically link to your geological model and perform all the key steps to prepare a resource estimate and validate the results. The webinar will focus on the major benefits of using Leapfrog Edge and what concepts to consider before you start.
Join Seequent’s Jelita Sidabutar (Project Geologist) and Steve Law (Senior Project Geologist) who will discuss:
- What Leapfrog Edge is and why it is beneficial to your work processes.
- Preparation Tips – Validate your data.
- The importance of domaining – the link between your Geology Model and the Resource Estimate.
- Estimation Strategies.
Please note, this webinar will be run in a combination of Bahasa and English.
Project Geologist – Seequent
Senior Project Geologist – Seequent
<v Jelita>(speaking in foreign language)</v>
I’m now handing over to you, Steve.
Thank you very much.
<v Steve>Thank you Jelita for that introduction.</v>
In the next section we will cover the concepts
that we should consider whilst creating a resource estimate.
At the same time, we will see
and demonstrate the capability of Leapfrog Geo
and Leapfrog Edge.
The first section I’d like to look at
is preparation of your data.
This includes viewing and visualizing our information,
checking that it’s in the correct spatial location
and understanding the impact of missing or no values.
This model represents an epithermal gold underground mine.
We have a series of underground
and surface drill holes
and a geological model has been developed
with several different lode structures.
The first thing we should do is check that spatially
it relates to things such as topography
and underground development.
You can see here that the colors of the drill holes
all seem to go to the surface and don’t go above it.
And if we have a look underground,
We can see that the underground drill fans
do seem to match up with the underground drill (indistinct)
that have been mapped in by survey.
So that’s a good starting point.
We can run a series of statistical evaluations
to check our data.
The first part of that is looking at our data as a whole,
before it has been domained.
I will look at the gold data set for this.
Leapfrog has a function
to check for errors on loading.
You’ll see here that over here
in the drill hole assay table,
there are little red indicators on each of the elements
except for gold.
Each of these means we should check it, and we have a function
at the database table level called fix errors.
If I click on the gold, this one has been reviewed,
which is why there is no little red symbol,
but what it shows us is any missing information
in the data set,
and also any below-detection values.
These could be less thans or negatives,
or just default values.
We can decide whether we want to omit missing information
or replace it with a different value.
So in this case, we’re omitting missing data,
but we’ve got some negative 0.01, 0.02, 0.05,
and we have the capability of giving a different rule
for each one.
Otherwise if we don’t define
a value for each different figure,
it will just replace all of them with 0.001.
Once these rules have been put in place,
we tick this little box up here and hit the save button.
And then that will apply going forward.
If we reload our data,
those same rules will be still in place.
You don’t have to do it each time you change your data set.
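The kind of per-value replacement rules described here can be sketched in a few lines. This is purely an illustration of the logic, not Leapfrog's implementation; the flag values and replacements are invented:

```python
# Sketch of per-value rules for missing and below-detection assays
# (mirrors the idea of the "fix errors" dialog; names are illustrative).
def clean_assays(values, rules, default=0.001):
    """Drop missing values; replace flagged below-detection values.

    values:  list of floats or None (None = missing interval)
    rules:   dict mapping a flagged raw value (e.g. -0.01) to its replacement
    default: replacement for any other non-positive value
    """
    cleaned = []
    for v in values:
        if v is None:            # omit missing data
            continue
        if v in rules:           # explicit rule for this flag value
            cleaned.append(rules[v])
        elif v <= 0:             # any other non-positive value
            cleaned.append(default)
        else:
            cleaned.append(v)
    return cleaned

raw = [1.2, -0.01, None, 0.35, -0.05]
rules = {-0.01: 0.005, -0.05: 0.025}    # e.g. half of each detection limit
print(clean_assays(raw, rules))         # [1.2, 0.005, 0.35, 0.025]
```

Giving each negative flag its own replacement, as in the dialog, avoids collapsing different detection limits onto a single default.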
You would then go through and do the same thing
for the other elements.
As we’re not using those today,
I don’t need to do that.
There is always a default value.
So in this case, I’m omitting missing data.
And in this case,
keeping positive values would be the default
if you didn’t choose to set one.
So ideally we would generally want to replace
non-positive values so that they don’t cause issues
in the underlying statistics.
When we hit save, then it processes those changes.
We can also look at a histogram.
If we look at a histogram of gold by itself,
we can see a histogram of the data.
And we can see that there are no negatives.
So everything is zero at the minimum.
And it gives us an idea of the scale of our data.
So this goes up to 443 grams per ton.
Leapfrog histograms are interactive with the scene.
So as long as I’m displaying this table in the scene,
if I wanted to see where all of my values
above one gram were, I can highlight my histogram
and in the scene view,
I can see that they’re all values above one.
So this is just a filter
that’s applied interactively on the scene.
It’s not one that we can save.
If I come back to here,
I can turn the histogram off and all my data will come back.
Next, we want to look at our statistics at a domain level.
So in this case,
we’ve got a vein system that’s been developed
in the geological models.
And I also have built a geological model
where I’ve split the veins into separate pieces.
With (indistinct) based on the underlying vein system.
So that if I update my veins here,
that will flow through automatically to this section.
So I can evaluate any geological model
back to my drill holes.
It’s basically flagging the codes into the drill hole space.
We do that by right clicking on the drill holes
and doing a new evaluation.
That then lets us pick a geological model
and it creates a new table.
In this case down here, estimation domains, evaluation.
I’ve then merged that using the merge table function
with the assay table,
and then I’m able to do statistics on that.
So I’ll do a box plot.
And this will show me the gold broken down by domain.
So we can see here that four of them have similar values,
but 4,400 is a lower grade vein,
with a mean of around two grams.
We also can do compositing within Leapfrog Geo.
This is all done underneath the new numeric composite.
And what we’re able to do is we can do it against
the subset of codes.
And this allows us to choose in this case,
our estimation domains.
And at the same time, we can do compositing
for all the different domains.
And we could choose to do all of the elements.
At the same time.
We can apply different rules to each domain.
In this case, I want to use the same rules,
but if I had a domain where I wanted to use
2 meter composites, rather than 10 meter composites,
I could do that here.
Now in the real ones that I’ve done,
we are looking at one meter composites
across all the domains.
And if I have less than half a meter remaining
when it hits the domain boundary,
then it’s going to distribute the piece
across the whole domain.
We have a concept here called minimum coverage.
This really only has an impact if we have missing data.
If there is no missing data
and you’ve got values throughout your mineralization zone,
it does not matter what you put in here.
The default value is 50%.
And generally you either choose 0%, 50% or 100%.
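The fixed-length compositing and minimum-coverage behaviour just described can be sketched roughly like this. It is a toy compositor with invented intervals, not Leapfrog's implementation:

```python
# Toy fixed-length compositor with a minimum-coverage rule (illustrative).
def composite(intervals, comp_len=1.0, min_coverage=0.5):
    """intervals: sorted list of (frm, to, grade); gaps allowed.
    Returns (frm, to, mean_grade) composites; a composite is kept only
    if its covered length / comp_len >= min_coverage."""
    if not intervals:
        return []
    start, end = intervals[0][0], intervals[-1][1]
    out = []
    d = start
    while d < end:
        top = min(d + comp_len, end)
        covered, accum = 0.0, 0.0
        for frm, to, g in intervals:
            lo, hi = max(frm, d), min(to, top)
            if hi > lo:                      # overlap with this composite
                covered += hi - lo
                accum += g * (hi - lo)       # length-weighted grade
        if covered / comp_len >= min_coverage:
            out.append((d, top, accum / covered))
        d = top
    return out

# one meter of 2 g/t, a 0.3 m gap, then 0.7 m of 10 g/t
data = [(0.0, 1.0, 2.0), (1.3, 2.0, 10.0)]
print(composite(data, comp_len=1.0, min_coverage=0.5))   # keeps both
print(composite(data, comp_len=1.0, min_coverage=1.0))   # drops the gappy one
```

With 0% or 50% coverage the partly filled composite over the gap survives; at 100% it is dropped, mirroring the comparison made in the drill hole correlation tool.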
We have a really useful tool called drill hole correlation,
where you can examine the results of your compositing
and get a feel for how changing those different parameters
might affect the composites.
So I have a look at this one.
I’ve picked a particular hole.
Now this one had some missing data down
in the middle of one of the veins.
So if I want to have a look and see
what the different compositing would do:
it’s all one meter,
and the only change is that the coverage has been changed
from 0 to 50 to 100.
And basically the percentage means
it’s relative to the length that you’ve chosen.
So if we just zoom in a little bit.
Okay, so if we have a look at down here,
there’s a gap in our data in through here.
With 0 and 50% coverage,
we still get a composite being placed in through here.
You can see here that that 1.7
has been taken over into the composites.
But if we use 100%,
which basically means that I need a full one meter
to keep it,
then it’s not keeping that last one meter piece.
The other thing we can look at in this scenario too,
is have we done our domain correctly?
This one down here is clearly using this 11.25.
And in the original assays,
it’s obviously a combination of the 18.5
and some of this 0.36 has been included.
But we have a 97 gram assay interval
and that’s composited to 58
and it’s not included in the vein.
So that may be a misinterpretation at this level.
Well, maybe it’s another vein
that’s not being modeled at this stage.
But it’s just one way of looking at your data
and determining whether the interpretation
is honoring the data as best it can.
These correlation tables can all be saved,
so you can have multiple of these saved as drill hole sets
under the drill hole correlation folder in through here.
The next stage in running our resource estimate,
and equally as important as the data integrity,
is domaining.
And this is the link between your geology model
and the underlying estimate.
And we have a series of tools
where we can do exploratory data analysis.
Leapfrog Geo and Edge rely on having
a robust geological model that underlies the estimate.
When we are using Leapfrog Edge,
we have an estimation folder and within that,
we create a new domain estimation.
I’ve already got a series of them created here.
So we’re going to have a look at the estimation
for vein 4,201.
So the domain is chosen from a closed mesh
within the system.
So it can be anywhere in the meshes folder
or ideally within a geological model.
We can then pick up our numeric values
and they can either be from the underlying assay data,
and then we can composite on the fly.
Which is what I’ve done in here.
Or if we chose to use one of the composited files,
which are here,
then that compositing section would gray out.
The first thing it shows us,
if I take that back to (indistinct),
is a contact plot.
And this shows us the boundary conditions of the domain
against what’s immediately outside it.
So we’re looking at right angles to our domain
in all directions.
And it’s showing us here that within one meter
of the inside, we’ve got high values.
So we’re getting higher assays above 10 grams per ton.
But immediately crossing the boundary.
We’ve dropped down to below five grams per ton.
Now some of these things could be modeled in other veins.
So we’re really just looking around the immediate contact.
If there is a sharp drop-off such as this,
then it suggests that we should use a hard boundary.
If not, we will use a soft boundary,
which allows us to use some of the samples outside
up to a certain distance, which we define
to be included in the estimate.
We are still only estimating within the domain itself,
but it’s using some samples from outside.
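The hard versus soft boundary choice amounts to a simple sample filter. A minimal sketch, assuming each sample already carries a signed distance to the domain boundary (negative meaning inside; all values invented):

```python
# Hard vs. soft boundary sample selection (illustrative sketch).
def select_samples(samples, soft_dist=0.0):
    """samples: list of (signed_dist, grade), signed_dist < 0 inside the
    domain. soft_dist=0 gives a hard boundary; soft_dist > 0 also admits
    outside samples within that distance of the contact."""
    return [g for d, g in samples if d <= soft_dist]

samples = [(-2.0, 8.4), (-0.5, 12.1), (0.8, 3.2), (4.0, 0.4)]
print(select_samples(samples))           # hard boundary -> [8.4, 12.1]
print(select_samples(samples, 1.0))      # soft, 1 m -> [8.4, 12.1, 3.2]
```

A sharp drop on the contact plot argues for the first call; a gradual change argues for admitting nearby outside samples as in the second.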
Now, how do we choose the domain?
There are lots of things to consider,
and the domain can be the most important part
of your resource estimate.
If we get the geological interpretation wrong,
then our underlying estimate will be incorrect as well.
So some of the things that we should consider:
what is the regional and local scale geology like?
Do we have underlying structural controls?
Are there different phases of mineralization,
and are they all the same style?
At least for some of it,
could there be early stage pervasive mineralization
cut across by later structurally focused
faults and deformation?
How do these impact?
Are they syndepositional with the mineralization?
Are they all post-mineralization or later, and cutting things up?
What is the continuity of the mineralization?
This flows through into our variography studies.
Do we have very short scale continuity
such as we often get within gold mines?
Or is there better continuity,
such as in base metals or iron ore?
Have there been overprinting effects
of alteration and weathering?
Have we got supergene enhancement of the mineralization
up above the base of oxidation?
So Leapfrog Edge relies on having a closed volume
as the initial starting point.
We develop a folder for each of these
and we can look at the domain in isolation.
So we’re looking here at a single vein.
We can look at the values within that vein.
And we can do statistics on those.
So again, this is domain statistics now,
so we can see that we still have a skewed data set.
So ranging from 0.005, up to 116.5 grams.
The mean of our data is somewhere around 15.
If we wanted to consider whether we need to top cut,
we can first look and see where those values are.
So we can see here that there’s a few scattered values
within the domain.
They’re not all clustered in one area.
So if they are all clustered down, say down here,
then we might have to consider subdividing those out
as a separate little high grade domain.
But they’re not, so we’re unlikely to do that.
So another option is to look at top cutting.
One tool we have for looking at top cutting
is the log probability plot.
Is there a place where the plot
starts to deteriorate or step?
Up here, we can see that around about 90 grams
there’s a step, up above the 98.7th percentile.
We could be a little bit more conservative
and come down towards 40 grams.
Down through here.
There’s no right or wrong on this.
And you would actually run the estimate
potentially at both top cuts and see how much impact
removing some of this high grade has
from the underlying estimate.
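Either cut is easy to trial numerically before committing. A sketch of the impact on the mean, with invented grades and the more conservative 40 gram option mentioned above:

```python
# Applying a top cut and checking its impact on the mean (illustrative).
def top_cut(grades, cut):
    """Clamp every grade to the chosen top cut."""
    return [min(g, cut) for g in grades]

def mean(values):
    return sum(values) / len(values)

grades = [0.5, 1.2, 3.4, 8.0, 97.0]   # invented, strongly skewed
cut40 = top_cut(grades, 40.0)
print("uncut mean:", mean(grades))
print("cut mean:  ", mean(cut40))
```

Running the estimate at each candidate cut and comparing the resulting means, as suggested above, quantifies how much metal the high grades are contributing.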
Within each domain estimation,
we have the ability to do variography studies
under the spatial models folder.
We can apply variable orientation.
So if our domain is not linear and we can see here
that this vein does have some twists and turns in it,
then we can use variable orientation,
which will align the search ellipsoid to the local geometry
to get a better sample selection.
And we can also consider de-clustering
under the sample geometry.
De-clustering is often needed when we are drilling a resource
from the early days through to grade control.
In the early days,
we’ll start finding bits and pieces of higher grade,
and we tend to cluster our drilling up
towards where the high grade is.
This has the tendency of biasing
our drilling towards the high grade.
So the overall mean of the data in this instance
could be 15 grams (indistinct).
But the underlying real grade of that
could be a little bit less than that
because we haven’t actually drilled
all of the potentially low grade areas
within that domain.
So we can set up a sample geometry.
If I open up this one.
And what I’ve done here is I’ve aligned the ellipsoid
roughly with the geometry of the vein.
And I’ve looked at 40 by 30 by 10 meters.
This data set has a lot of drilling.
It’s got underground grade control channel samples as well,
around three meters apart,
and we can get up to 40 or 50 meter spaced
drill holes in some areas as well.
So, thinking that maybe my indicated range was 40 meters,
I will see whether there’s an impact from de-clustering.
The little graph here suggests that there is:
we’ve got an input mean of 14.96.
But the de-clustered mean suggests
that it’s closer to 14.
What’s the impact of making my spacing a little bit larger?
If I have a look at 80 meters,
I can see that potentially getting something
of a similar result.
To see what the de-clustered mean is,
I can open the properties.
So I can see that my naive mean is 14.966.
And my de-clustered mean is 14.23.
If I look at slightly wider spacing,
my de-clustered mean is going up to 15,
but it’s very close to the naive mean.
So this suggests that overall,
there may be a little bit of impact of de-clustering,
but my overall grade
is definitely somewhere between 14 and 15 grams.
It’s not 12, it’s not 16 or 17.
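Cell de-clustering itself is simple to sketch: weight each sample by the inverse of the number of samples sharing its grid cell, so a drilled-out high-grade pocket no longer dominates the mean. The coordinates, grades and cell size below are all invented for illustration:

```python
from collections import Counter

# Minimal cell de-clustering sketch (illustrative, not Leapfrog's code).
def declustered_mean(points, grades, cell):
    """Weight each sample by 1 / (samples in its grid cell)."""
    cells = [tuple(int(c // cell) for c in p) for p in points]
    counts = Counter(cells)
    weights = [1.0 / counts[c] for c in cells]
    return sum(w * g for w, g in zip(weights, grades)) / sum(weights)

# four clustered high-grade samples plus one lone low-grade sample
pts = [(0, 0, 0), (1, 1, 0), (2, 0, 1), (1, 2, 2), (90, 0, 0)]
au  = [20.0, 18.0, 22.0, 16.0, 2.0]
print("naive mean:      ", sum(au) / len(au))
print("declustered mean:", declustered_mean(pts, au, cell=40.0))
```

Trying different cell sizes, like the 40 and 80 meter spacings above, shows how sensitive the de-clustered mean is to that choice.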
In Leapfrog Edge you do have the opportunity
to apply de-clustered weights,
which this defines for each sample,
when running an inverse distance estimator.
Often though I won’t use it directly
and just consider how it looks at the end.
When we’re looking at the statistics of our block model,
I’m hoping that the overall domain statistics of my blocks
will be similar to the de-clustered mean.
So if I get a bit of variance
and, instead of being close to 15, it’s closer to 14,
I wouldn’t be concerned.
But if I were getting a block estimate
well above 15, say 15 and a half or 16 overall,
or down around 13,
I’m either underestimating or overestimating
on a global scale.
And in that case I may need to change
some of my estimation search parameters.
One of our biggest tools of course, is variography.
We do have the capability of transforming our data.
So this is done generally for skewed data sets
and it helps us calculate a semi variogram
that’s better structured
than if we do not transform the values.
We simply right click on our values and transform them.
And that gives us the transformed values here.
If we look at the statistics of the transformed values,
they now form a normal histogram.
So all the impact of the higher grades
is being compressed down into the tail.
So when we calculate our variogram
we’re not getting such wide variations between sample pairs.
And this gives us a better structured variogram to model.
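A common way to do such a transform is the normal score transform: rank the values and map the ranks to standard normal quantiles. A stdlib sketch of the idea, not Leapfrog's exact implementation:

```python
from statistics import NormalDist

# Normal score transform sketch: skewed highs are compressed into the
# tail of a standard normal, giving a better structured variogram.
def normal_scores(grades):
    n = len(grades)
    order = sorted(range(n), key=lambda i: grades[i])
    nd = NormalDist()
    scores = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n          # plotting position in (0, 1)
        scores[i] = nd.inv_cdf(p)     # standard normal quantile
    return scores

au = [0.1, 0.4, 0.9, 2.0, 443.0]      # invented, strongly skewed
print([round(s, 3) for s in normal_scores(au)])
```

The 443 gram outlier ends up only a little above the rest in normal score space, so sample pairs no longer produce the extreme squared differences that wreck the experimental variogram.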
If we have a look at the variogram for this one,
we are able to have the radial plot.
Now this radial plot is in the plane of the mineralization.
So we tend to set the plane up first.
And in this case it’s aligned with the underlying vein.
And then we model in the three directions
perpendicular to that.
The range of greatest continuity
is reflected by the red arrow.
And then the cross to that is the green.
And then across the vein itself,
the smallest direction is represented in the blue.
These graphs are interactive with the scene.
So we can always see our ellipsoid in here.
And if we move that, it will update the graphs.
Again, if we change this one here,
it will correspondingly change the ellipsoid
so we can see we’ve stretched it out in that orientation.
So we’ll take that back to where it should be.
Down through here.
We can see that my ellipsoid has shrunk.
So we can always keep in contact
with our geology and our orientations
and make sure we don’t inadvertently
turn our ellipsoid in the wrong direction.
The modeling tools are all standard,
similar to what’s available
in most variogram modeling packages.
You can adjust your lag distance, your number of lags,
angular tolerance, and bandwidth as needed.
Once we’ve got a model that may fit,
we save it and we back transform it.
The back transform is automatic:
once we hit that button, it takes us
from that normal score space back into our grade space,
before we run the estimate.
We can save any number of different variograms.
So I can actually copy this variogram here.
I can make some slight changes to it
and choose to use that one.
And I could run multiple variograms
and test to see what kind of impact they may have.
Up until this point we’ve been focusing
mainly on our diamond drilling information
that we’ve got for this particular domain.
Leapfrog now has the capability
of storing multiple databases within the project.
So I’ve got a diamond drill hole database,
and I’ve also got a grade control
underground channel database.
I can choose to have a look at whether there is
a bias between the channels
and the drill holes.
At the moment, we need to export these
and join them together as points.
In an upcoming release, we will be able
to merge these things automatically.
Here I have one meter composites
of the diamond drill holes and the channels.
And if I do statistics on that, I can run a QQ plot.
I’ve used query filters to isolate the diamond drill holes
and the channels.
And we can see here that there is a bias
to the underground channels,
in that they tended to give higher grades
than the equivalently located diamond drill holes.
So that’s something to keep in mind.
This is a real mine, and that was definitely the case.
And we had to consider applying different top cuts
to the channel samples versus top cuts
to the underlying diamond drill holes.
to help reconcile with the mill.
It’s a lot easier to tackle that kind of situation
when you start getting production data.
Before that, you don’t really know whether channels
are more representative
than all the diamond drill holes are.
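A QQ comparison like this is straightforward to sketch: with equally sized data sets, pairing the sorted values compares them quantile by quantile. All grades below are invented:

```python
# QQ comparison sketch: points sitting above the 1:1 line mean the
# second data set (channels) is biased high (illustrative values).
def qq_pairs(a, b):
    """Pair matching quantiles of two equally sized data sets."""
    assert len(a) == len(b)
    return list(zip(sorted(a), sorted(b)))

dd      = [0.5, 1.1, 2.0, 3.2, 5.0, 8.0]    # diamond drill composites
channel = [0.7, 1.6, 2.9, 4.4, 7.1, 11.0]   # channel composites
for q_dd, q_ch in qq_pairs(dd, channel):
    flag = "channel higher" if q_ch > q_dd else ""
    print(f"{q_dd:5.2f}  {q_ch:5.2f}  {flag}")
```

Every quantile plotting above the 1:1 line, as in this toy data, is the signature of the channel bias described above.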
Again, we can look at that data.
So here we have our channel samples in there as well.
So we can choose to use all of that information
within our estimate.
We can run variograms with both data sets combined.
We can run it with just the channels
or with just the diamond drill holes
and see whether there is any influence
on the variography on the different types of data
that we’re using.
To understand whether there’s any impact
on adding channel samples into it
for the variography,
we can simply copy the entire folder.
We replace the underlying information
with the combined data set, which is through here.
And we can still see that there’s still
quite a big drop in the contact plots
and no real changes in the pattern there.
Our variogram for the drilling only
gave us quite short ranges, 25 to 15 and 4 across the vein,
which we were able to model
with a single exponential structure.
If we have a look at the transformed variogram again
for that combined channel data.
We can see that it’s quite similar.
So the sills changed a little bit.
It didn’t really have to change the ranges.
So it’s giving a similar result in both cases.
Once we have done our data analysis,
we’re ready to set up our estimation strategy.
There are various things that we consider.
What stage are we at in the project?
Are we in early exploration or pre mining stages,
or are we at grade control level?
If we’re at grade control level,
we’re going to have a lot more information and data.
We may well be dealing with shorter ranged information
that we’ve got.
Whereas in the early stages, we don’t really know.
We might not have enough information to do kriging
or get a valid variogram.
In which case we may decide to use inverse distance
to some power.
The search strategy is an important concept.
And sometimes it can have more impact than the variogram
that we are working with.
There are lots of different theories
on what kind of search parameters we should set up.
Should we use lots of samples with long distances?
Should we use a small number of samples
to try and better match local conditions,
especially in grade control that can often be the case.
So we might use a limited number of samples
over shorter search ranges.
There’s also the potential to use a multi-pass strategy,
where you will use a larger number of samples
over a short distance to better reflect the local conditions.
But then you still need to estimate blocks further away
and you will gradually increase the search distance.
And at the same time, you may or may not
change the number of samples.
If you’re trying to estimate far-flung blocks
with limited data,
well, then you’re going to have to have
a low number of samples, but a large search.
The issue with this too, though,
is you can get artifacts between the search boundaries.
A classic example is where you display the results
and can see concentric rings of grade
around each search limit.
So we don’t want to see that kind of thing.
So we would need to do something
to try and eliminate that kind of thing.
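The multi-pass idea can be sketched as a loop over progressively looser searches, keeping the first pass that finds enough samples. A simple inverse-distance weight is used here just to make it concrete; the coordinates, grades and pass parameters are invented:

```python
import math

# Multi-pass search sketch: each pass widens the search; a block keeps
# the first pass that succeeds (illustrative, 2D for brevity).
def estimate_block(block_xy, samples, passes):
    """samples: list of ((x, y), grade); passes: list of
    (search_radius, min_samples). Returns (grade, pass_no) or (None, None)."""
    for pass_no, (radius, min_samples) in enumerate(passes, start=1):
        found = [(math.dist(block_xy, s), g) for s, g in samples
                 if math.dist(block_xy, s) <= radius]
        if len(found) >= min_samples:
            w = [1.0 / (d * d + 1e-9) for d, _ in found]   # inverse distance²
            grade = sum(wi * g for wi, (_, g) in zip(w, found)) / sum(w)
            return grade, pass_no
    return None, None                 # block left unestimated

samples = [((0, 0), 2.0), ((50, 0), 4.0)]
passes = [(10.0, 2), (60.0, 2)]       # tight pass first, then wider
print(estimate_block((5, 0), samples, passes))
```

Because the tight first pass fails for this block, the wider second pass fills it in; tracking the pass number is what lets us report how much of the resource each pass estimated.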
Kriging Neighborhood Analysis
is a way of testing a whole series of our variables.
Maximum number of samples,
minimum number of samples.
Trying different search distances.
Discretization is another parameter that we could play with;
that depends more on our ultimate block size in our models.
All of these things can be tested quite easily with Edge
and it’s all really a matter of copying estimators
and then running them through.
We have a separate webinar that goes through this strategy.
We can set up either inverse distance,
nearest neighbor or kriging estimators.
We also have the ability to do
radial basis function estimator,
which is the same algorithm that is used in the modeling
and in the numeric models in Leapfrog Geo.
It is more of a global estimate,
and it’s not too bad to use if it’s difficult
to apply a variogram.
We’re going to focus on the kriging estimator today.
So to create a new one,
you just right click and add a new kriging estimator.
If you have one already set up like this,
you can choose whether it’s ordinary or simple,
and we apply our discretization here.
And that’s generally related to our block size.
Any variogram that’s stored in the folder
can be selected from here.
And if we want to apply top cuts on the fly,
we can click the values at this point.
And this will apply only to this particular domain.
The most important part,
and I’ll cancel that one
as it’ll open up the one that’s already there,
is setting up our search.
Now, in this case, I’m using variable orientation,
but initially you would set it to the underlying variogram.
This is for 4,201, and that will take in the orientation
and it takes the maximum ranges that you developed.
But you can choose to make this smaller,
or you can make them bigger.
It’s up to you.
If we use variable orientation,
which is using the local geometry of the vein in this case
to help with the selection, we simply tick that box here.
We do have to have a variable orientation set up.
In the search parameters,
obviously we’ve got the standard ones.
We’ve got minimum and maximum number of samples,
we can apply an outlier restriction.
This is more of a limited top cut.
So in this case, we could have applied an overall top cut
that may take out all the values, say above 100,
but then what we’re doing is saying that if we’re within 40%
of the search, that’s defined here,
then we will use the value.
And then if it’s further than that away,
we will start to cut that down.
So in this case, we’re using 60.
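That outlier restriction can be sketched as a distance-dependent clamp. The 60 gram clamp and 40% fraction mirror the numbers above, but the logic is illustrative, not Leapfrog's exact rule:

```python
# Outlier restriction sketch: distant high grades are clamped, close
# ones keep their full value (illustrative logic and numbers).
def restrict_outlier(grade, dist, search_range, clamp=60.0, frac=0.4):
    """Clamp grades from samples lying beyond frac of the search range."""
    if dist <= frac * search_range:
        return grade               # close samples keep their full value
    return min(grade, clamp)       # distant samples are clamped

print(restrict_outlier(97.0, 10.0, 50.0))   # within 40% of 50 m -> 97.0
print(restrict_outlier(97.0, 35.0, 50.0))   # beyond 20 m -> clamped to 60.0
print(restrict_outlier(4.0, 35.0, 50.0))    # low grades unaffected -> 4.0
```

The effect is a softer version of a global top cut: the high grade still informs nearby blocks but cannot smear across the whole search neighborhood.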
We can apply sector searches on a quadrant or octant basis.
And we have drill hole limit, which is maximum samples,
per drill hole.
Which is relating back to here.
So in this case it means we’d need at least two drill holes
to make an estimate.
So on our early passes,
we may have quite a few restrictions in,
but as we loosen it up,
if we’re doing a multi-pass strategy,
we may need to vary that down and take some of those away.
In this project, at this stage,
if we’re only looking at the diamond drill only set up,
we’ve got a domain estimation for vein 4,201,
and we’ve also created another one for vein 4,400.
Within those, we’ve got two passes of kriging,
and I have got an inverse distance one
set up as a comparison.
And underneath the 4,400 domain
we have a three pass kriging setup.
And again, I’ve got an underlying
inverse distance one set up
as a comparison to the kriging.
We want to be able to join all this information together
at the end.
So we have a thing known as a combined estimator.
This is set up at the top level here, combined estimator.
And it’s like a merge table with the drill holes.
If we have a look at this one here.
What this is doing, it’s joining together the domains
so that we’ll be able to see both domains at the same time
in the scene and then it’s got the passes.
And it’s important that the passes are in order.
So pass one and two.
One, two, three.
The way Leapfrog works is that for each pass it reruns
and flags a block with a value for every block.
So you will get subtle differences
between pass one and pass two in some of the blocks.
This is telling it that if I get an estimate
on the first pass, then I will use it.
But if not, then I will use pass two.
If I haven’t estimated that block
after pass one or pass two with this one,
then it will take the value from pass three.
And then the final value is displayed
using this combined variable name here.
At the same time that we’re storing this information.
It will flag for us what pass
that the estimate occurred on.
So that is this EST estimated here.
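The combined estimator's pass priority reduces to "first pass with a value wins". A sketch, with invented block values (None meaning the pass could not estimate that block):

```python
# Combined-estimator sketch: each pass writes a value for every block it
# can reach; the combined result is the first pass, in order, that did.
def combine_passes(pass_values):
    """pass_values: list of per-pass block lists, pass 1 first.
    Returns (combined_value, pass_number) per block."""
    combined = []
    for block in zip(*pass_values):
        for pass_no, v in enumerate(block, start=1):
            if v is not None:
                combined.append((v, pass_no))
                break
        else:
            combined.append((None, None))     # no pass reached this block
    return combined

pass1 = [5.59, None, None]
pass2 = [5.96, 5.51, None]
pass3 = [4.09, 5.20, 3.80]
print(combine_passes([pass1, pass2, pass3]))
# [(5.59, 1), (5.51, 2), (3.8, 3)]
```

This is why the pass order in the combined estimator matters: the first block here keeps 5.59 from pass one even though later passes produced slightly different values.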
All of these parameters can be viewed in a parameter report.
And this gives us a nice table that shows us, for instance,
all the different search parameters
for each of the estimation domains that we’ve set up.
We can see our ellipsoid ranges.
This is a good way of checking everything overall
to see whether we might’ve made a mistake.
For instance, if there was a maximum value
that has been set incorrectly,
or an issue here,
we can just simply click on that there
and it’ll open up the underlying estimator for us.
And we could go back and change that to another value.
So as soon as I changed the value here,
you’ll see that the table will update.
In the next release, this table itself
will be directly editable,
but for the moment it just opens up the underlying form.
So I’ll just change that back to 24.
And that’s done.
So you can see there that it’s reprocessing
there in the background, but we can still do things.
We can export this data to a spreadsheet,
and it’ll give you an overview of that.
It’s a formatted spreadsheet.
So that’s easy to take all this information
and put it into a report.
So here, I’m looking at the variogram tab.
So it’s giving me the name, the variance, the nugget,
the sills, and our ranges.
So all neatly tabulated, ready to put into a report.
Once we have set up all of our estimation parameters,
we are ready to work with our block model.
So in this case, I’m using a sub-block model
and we have a new format called octree,
which is similar to a (indistinct) structure.
Basically it means that if we have a parent size,
then the sub-blocks must divide the parent by powers of two.
So 1, 2, 4, 8, up to 64 divisions of the parent block.
We can rotate in dip and azimuth as in any block model.
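The octree constraint on sub-block sizes is easy to state in code: each level halves the parent in every direction, giving up to 64 divisions per axis after six levels. A sketch with an invented 16 m parent:

```python
# Octree sub-blocking sketch: valid sub-block edge sizes are the parent
# size divided by successive powers of two (illustrative 16 m parent).
def octree_subblock_sizes(parent, max_levels=6):
    """Allowed sub-block edge sizes along one axis, coarsest first."""
    return [parent / (2 ** level) for level in range(max_levels + 1)]

print(octree_subblock_sizes(16.0))
# [16.0, 8.0, 4.0, 2.0, 1.0, 0.5, 0.25]
```

So a requested sub-block size that is not on this ladder, say a fifth of the parent, would not be a valid octree subdivision.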
When we’re constructing our models,
we can see it relative to our data.
I’ll bring back in some of my values, for instance, here.
Bring in the drill holes.
So we just construct a single block model
for all of this data.
We can change the boundary at any time using the handles.
It shows us our sub-block size, as we zoom in there.
So these are the parent blocks and these are the sub-blocks.
It automatically evaluates anything
that we bring across to this side.
So here I’m wanting to store my host rock model,
my vein system and my estimation domains.
So each thing here will basically flag a corresponding block
with the codes.
And then I’m bringing across my estimators.
Now for final results, I only actually need
the combined estimator in here.
But if I want to validate against the swath plot
and have a look at the underlying composites,
I do need to bring across the original estimator itself.
Because this is where the information is stored
on what samples are being used and distances,
and average distance, et cetera.
This is simply taking the end result of each estimate
and flagging it into the block.
We also have the capability of flagging meshes
directly into the block model now.
And this is under a function called combined meshes.
That is done down here
in the combined models folder.
And it’s actually a grouped mesh.
I’ve done a grouped mesh for the drives.
So I open that up.
All we need to do is select the meshes
directly from the meshes folder.
So I’ve hit the plus button to add,
and that will let me go and pick.
In this case, I’ve got design stored as designs.
And I’ve got development drives in the meshes.
So any valid closed solid within the project
inside a geological model,
meshes folder or designs folder can be used.
It will automatically pick up the name
that’s been assigned to that object.
And this can be changed if you wish to change it.
And then that will be flagged
and evaluated into your model.
So I’ll just cancel that.
So to look at the results of our block model,
we just drag it into the scene.
And we have all the options of viewing
all the different variables that we’ve stored down here.
The one really good one that we’ve got is looking at status.
So if we wanted to have a look at our inverse distance
of that vein 4,400, and we went and looked
at the status of that.
If I turn off outside, it shows me the set up
I’ve got for the search.
The white blocks are estimated
and the purple ones have not been estimated.
So I may want to do a second pass to fill in these blocks.
If we look at the combined variable, for instance for
Au 4,400, and go to the est parameter,
It will color code the passes for us.
So we can see here that the blocks
that were estimated on the first pass are the pale blue.
Slightly darker blue for the second.
And for the third.
If I highlight one of those blocks,
I can see here that the first pass,
I’ve got a value of 5.59; on pass two, it was 5.96.
A little bit different.
And then the third one, looking at a much larger search
and fewer samples, has come up with 4.093.
But because we got a value on the first pass,
the underlying end result is that first pass.
If we click on the second pass here,
we can see that there was no value assigned on pass one.
So it’s going to use the value from pass two,
which is 5.51.
You can see that’s the same vein down in here.
So we can get a good idea of how much of our resource
is being estimated on each pass.
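The pass-priority rule just described can be sketched in a few lines. This is an illustrative sketch, not Leapfrog’s actual implementation: for each block, the combined estimate takes the first pass that returned a value, and later, larger-search passes only fill the gaps.

```python
def combine_passes(pass_values):
    """Return the first non-None estimate and which pass supplied it.

    pass_values: per-pass estimates for one block, ordered
    pass 1, pass 2, pass 3, ...; None means "not estimated".
    """
    for pass_number, value in enumerate(pass_values, start=1):
        if value is not None:
            return value, pass_number
    return None, 0  # block remains unestimated after all passes

# Block estimated on pass 1: passes 2 and 3 are ignored.
print(combine_passes([5.59, 5.96, 4.093]))  # -> (5.59, 1)
# Block missed on pass 1: pass 2 fills it in.
print(combine_passes([None, 5.51, 4.8]))    # -> (5.51, 2)
```

The example values are the ones quoted in the demo; counting how often each pass number is returned gives the per-pass breakdown reported next.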
We can quantify that within the block models.
We have the ability to build a series of reports
and they are stored within the block model.
So when any data is updated and the model reruns,
the reports will rerun for us.
So if I have a look at this estimation passes 4,400,
I can see that on the first pass,
this was the volume of blocks estimated,
then pass two and pass three.
So it’s approximately a little bit over 50%
was done on pass one.
I can also check the volume of the blocks
against the volume of the underlying wire frame,
because we want them to be fairly close to each other.
So in this case the block volume totals 574,000.
And if we have a look at the domain volume, it is 583,000.
So we’re within a couple of percent of each other.
If we wanted to try and get it closer,
we would have to change our block size
and make it potentially smaller.
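The volume reconciliation just described is a simple percentage check. A minimal sketch, using the figures quoted in the demo (the function itself is illustrative, not part of Leapfrog):

```python
def volume_difference_pct(block_volume, domain_volume):
    """Percentage by which the block volume deviates from the wireframe volume."""
    return (block_volume - domain_volume) / domain_volume * 100.0

diff = volume_difference_pct(574_000, 583_000)
print(f"{diff:.1f}%")  # -> -1.5%, i.e. within a couple of percent
```

A smaller block size generally shrinks this difference, because the blocks can follow the wireframe boundary more closely.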
So after getting a feel
for what’s happening with our estimation,
obviously we want to have a look at our actual results.
And the best way of course
is to visually compare our gold values
against our underlying sample values.
And we would need to use the same color scheme.
(indistinct) those, so we
look at a thin slice.
So we bring in our values and our block model
with the same color scheme,
and then look through it and check that,
down the line, the colors of the samples
are matching the colors of the blocks.
Where they’re dominantly low grade in this case, green,
the blocks are green up in here.
But as we go into higher grade areas,
we can see that the blocks start to change color.
Another way of validating our data
is to look at swath plots.
Again, these are built within the block model themselves,
so we can apply as many swath plots as we wish.
And once they’re built, they’re stored.
So if we want to re-look at the one built
for swath 4,400,
we can see here that I’m comparing…
In this case, I am looking at inverse distance
against kriging for pass two.
And the underlying composites are highlighted in red.
in this case.
So the underlying composites are red, alongside the two estimates.
We can see that the kriging and the inverse distance
are very similar to each other.
This is in the Y direction and the Z direction.
We will often see this where we want the trend
to be followed.
And the estimate tends to bisect the highs and lows
where there’s an abrupt change from high to low.
But overall, the trend is being followed.
And that suggests that this is a reasonable estimate.
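What a swath plot computes can be sketched simply: slice the deposit into bands along one axis and compare the mean composite grade with the mean block estimate in each band. This is an illustrative sketch with made-up data, not Leapfrog’s implementation:

```python
from collections import defaultdict

def swath_means(points, slice_width):
    """Mean value per slice along one coordinate.

    points: iterable of (coordinate, value) pairs.
    Returns {slice_index: mean value in that slice}.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for coord, value in points:
        band = int(coord // slice_width)
        sums[band] += value
        counts[band] += 1
    return {band: sums[band] / counts[band] for band in sums}

# Invented composite and block-estimate data for two 10 m slices.
composites = [(5, 2.0), (8, 3.0), (15, 6.0), (18, 4.0)]
estimates = [(4, 2.4), (9, 2.6), (14, 5.2), (19, 4.8)]
print(swath_means(composites, 10))
print(swath_means(estimates, 10))
```

Plotting the two sets of slice means side by side, for each axis in turn, gives the swath comparison described above: the estimate should follow the composite trend while smoothing abrupt highs and lows.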
The other way, of course,
is to look at the actual statistics.
So we can do that on the block model statistics.
We can do a table of statistics.
We would pick out categories.
So in this case, if I select my estimation domains,
we have None, 4201, and 4,400.
And then we need to pick the values.
So in this case I will look at the combined variable.
So we can see here that the mean for 4201 is 15.3.
And the mean of the blocks for 4,400 is 2.085.
If we remember, we can go back to the domain
and look at the statistics of 4201.
The mean there is 14.7.
So just a little bit under 15 and for the other one,
it is 2.65.
So again, a little bit lower in that case,
a little bit higher, a little bit lower.
And then we may want to tweak some of the parameters
to see if we can get that little bit closer.
But locally, in the area where we’re mining,
it might make perfect sense.
And there may be some edge effects
that cause the global mean
to be not quite exactly what you’d expect.
But it’s important to validate the block’s statistics
against the sample statistics.
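The block-versus-sample validation just described boils down to comparing means per domain and flagging large relative differences. A hedged sketch using the numbers quoted in the demo (the function and the tolerance are illustrative examples, not a standard or a Leapfrog feature):

```python
def validate_means(block_means, composite_means, tolerance_pct=15.0):
    """Return {domain: relative difference %} for domains exceeding tolerance."""
    flagged = {}
    for domain, block_mean in block_means.items():
        comp_mean = composite_means[domain]
        diff_pct = (block_mean - comp_mean) / comp_mean * 100.0
        if abs(diff_pct) > tolerance_pct:
            flagged[domain] = diff_pct
    return flagged

block_means = {"4201": 15.3, "4400": 2.085}
composite_means = {"4201": 14.7, "4400": 2.65}
# 4201 differs by about +4%; 4400 by about -21%, so only 4400 is flagged.
print(validate_means(block_means, composite_means))
```

A flagged domain is where you might tweak the estimation parameters, while keeping in mind that locally reasonable results and edge effects can legitimately move the global mean.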
The final thing we can look at then is the report
from the block model.
So we can again create reports
built within the block model itself.
We’ve got one already checked here.
So if we select different categorical columns
in this case, the estimation domains,
and whether or not it’s mined,
based on underlying stope and development drives,
and I’m reporting against a final gold field.
All this is, is the combined variable
and any blocks that weren’t estimated after the three passes
were assigned a very low value.
So that will give us the table.
So we can see for each of the estimated domains,
we’ve got it divided up into development and stope panels,
and what’s been left un-mined.
And the results.
And at this stage this is applied at a cutoff.
We can store any number of these in there as well.
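The kind of report just described can be sketched as a grouped aggregation: group the blocks by category (estimation domain, mined or un-mined) and report volume and volume-weighted mean grade above a cut-off. The block data and cut-off here are invented for illustration; this is not Leapfrog’s report engine.

```python
def report_above_cutoff(blocks, cutoff):
    """Aggregate blocks at or above cutoff, per category.

    blocks: iterable of (category, volume, grade) tuples.
    Returns {category: (total_volume, volume-weighted mean grade)}.
    """
    totals = {}
    for category, volume, grade in blocks:
        if grade < cutoff:
            continue  # below cut-off: excluded from the report
        vol, metal = totals.get(category, (0.0, 0.0))
        totals[category] = (vol + volume, metal + volume * grade)
    return {cat: (vol, metal / vol) for cat, (vol, metal) in totals.items()}

blocks = [
    ("stope", 100.0, 5.0),
    ("stope", 100.0, 1.0),   # below cut-off, excluded
    ("development", 50.0, 3.0),
]
print(report_above_cutoff(blocks, cutoff=2.0))
```

Because such a report is defined against the block model’s columns rather than a snapshot of its values, it can be rerun automatically whenever the model updates, which is the behavior described above.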
So I hope that that has been informative
on things to consider when looking
and running a resource estimate.
So support is available at all times from our Perth office.
Our headquarters are in Christchurch, New Zealand.
And [email protected] is the best place to locate us,
and get ahold of us.
Thank you very much for listening to the presentation today.