
This is a technical review of the radial basis function interpolant for creating efficient and dynamic workflows in numerical modeling.

We will take a deeper look at how Leapfrog Geo constructs a numerical model in order to build upon and provide a better understanding of the tools available at your disposal.


49 min


Video Transcript

<v Suzanna>All right, hello and welcome everyone.</v>

I make that 10 o’clock here in the UK.

And so, let’s get started.

Thank you for joining us today for this tech talk

on Numeric Modeling in Leapfrog Geo.

By way of introduction, my name is Suzanna.

I’m a Geologist by background based out of our UK office.

And with me today are my colleagues,

James and Andre, both of whom will be on hand to help

run this session.

Now, my aim for this session is

to focus in on the interpolant settings

that will most effectively improve your numeric models

from the word go.

Let me set the scene for my project today.

This is a copper gold porphyry deposit.

And from the stats,

we know that it is our early diorite unit

that is our highest grade domain.

For the purpose of today’s session,

I’m going to focus on modeling the gold assay information

imported from these drill holes, but in reality,

I could choose to model any type of numerical value here,

for example, our RQD data.

And if that is the workflow that you’re interested in,

then I’d recommend you head over to the Seequent website,

where there are a couple of videos

on this topic already available.

So coming down to my numeric models folder,

I’m now going to select a new RBF interpolant.

And in the first window,

start to specify my initial inputs and outputs.

First, I’m going to want to specify

some suitable numeric values.

So in this case,

I’m going to use my composited gold assay table,

but I can also choose to apply any applicable query filter.

Now, in this case, as part of my initial validation steps,

I actually created a new column in my collar table

called valid simply to help me identify which drill holes

I want to use in my modeling.

I did this by first of all,

creating a new category selection

to make those initial selections

then created a very simple query filter

simply to exclude any of those that are invalid.

And actually for the most part,

it’s just this one hole at the bottom

that I chose to exclude.

As I work on my interpretation or indeed bring in more data,

then it’s always useful to have flexibility

like this built into your drill hole management workflow.
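
As an aside, the valid-flag filter described here can be sketched outside Leapfrog, say in Python with pandas; the hole IDs and the "valid" column name below are hypothetical, purely to illustrate the exclude-invalid query.

```python
import pandas as pd

# Hypothetical collar table: hole IDs plus a manually flagged 'valid' column,
# mirroring the category selection + query filter described in the talk.
collar = pd.DataFrame({
    "holeid": ["DH001", "DH002", "DH003", "DH004"],
    "valid":  ["valid", "valid", "valid", "invalid"],
})

# The query filter: keep only holes not flagged as invalid.
usable = collar[collar["valid"] != "invalid"]

print(list(usable["holeid"]))
```

The same one-line filter then feeds every downstream model, which is exactly the flexibility being described.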

So I would encourage you to set up something similar

if you’ve not already done so,

but for now though, let me just go back to my

RBF interpolant definition again,

and let’s start to talk about the interpolant boundary.

Now, the default here is set to be our clipping boundary.

And if you don’t know what that is,

or if you want to set that,

then you can actually do that from either the topography

or indeed the GIS data folder.

So I could choose to manually change this boundary

or to enclose it around any object

that already exists in my project,

or indeed an existing model boundary or volume.

Whatever I set here as my interpolant boundary

is, by default, linked to the surface filter,

which in itself controls which values can go into the model.

I’m just going to bring a quick slice view into my scene,

just to sort of help us visualize

what I’m about to talk about.

But I will explain this: this

is just a slice of my geological model,

just with those gold values seen in the scene here.

Typically your interpolant boundary

would be set to enclose the entire data set

in which case all of your specified input values

will become interpolated.

So that’s kind of what we’re seeing in scene at the moment.

It’s just a simple view of everything

that’s in my data and in my project,

I could choose however,

to set a boundary around an existing volume.

So for example,

if I want to just specifically look at my early diorite,

which is anything here in green,

then I can choose to do that.

And if I just turn that on now, then you can see,

we’re just seeing that limited subset

of input values in that case.

Now, interestingly, if I wanted at this point to

mimic a soft boundary around the early diorite,

So for example, some sort of value buffer

say it’s about 50 meters away,

which is this orange line here.

Then I could incorporate this also into my surface filter.

And again, by doing so,

if I now select this distance function here,

then again, we’re going to see this update

on which input values can be used in the model.

For now though, let’s not complicate things too much.

I’m just simply going to use the same boundary

as my geological model.

And I’m also going to bring down my surface resolution

to something a bit more reasonable.

Now, a rule of thumb with drill hole data would be

to set this to your composite length.

So as to equal the triangulation

or indeed some multiple of this,

if you find the processing time takes too much of a hit

for this particular data set,

I have six meter length composites already defined.

So I’m going to bring the surface resolution down

to a multiple of that, 12 in this case.

And of course those composites are there

in order to help normalize

and reduce the variance of my data.
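
To illustrate why compositing normalizes support and reduces variance, here is a minimal sketch, assuming perfectly regular 2 m samples combined into 6 m composites; real compositing handles irregular lengths and residuals, which this toy deliberately ignores, and the grade values are made up.

```python
import numpy as np

# Hypothetical regularly sampled assays: six 2 m samples, combined into
# 6 m composites by simple (equal-length) averaging.
sample_len = 2.0
composite_len = 6.0
grades = np.array([1.0, 3.0, 2.0, 8.0, 1.0, 0.0])

n = int(composite_len / sample_len)           # samples per composite = 3
composites = grades.reshape(-1, n).mean(axis=1)

print(composites)                             # the two 6 m composite grades
print(grades.var(), composites.var())         # variance drops after compositing
```

The high-grade 8.0 sample is averaged with its neighbours, which is exactly the variance reduction being described.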

If I didn’t already have numeric composites set up

in my drill hole data folder,

then I could actually go in and define these here

directly as well.

For now though, let me just go back and use

these composites that I have.

And for now, let’s just say, okay to this and let it run.

All right, now, there will be some times here

that it’s going to be easier just to jump into models

that have already been set up.

And I think this is one of those cases.

So let me just bring in what that is going to generate,

which is essentially our first-pass gold numeric model.

And if I stick the legend on,

then you’ll start to get an idea

or reference point for that grade.

Now, the power of Leapfrog’s RBF engine

is in its ability to estimate a quantity

into an unknown point by using known drill hole

or point data.

It’s important however,

when we estimate those unknown points

that we’re selecting an interpolation function,

that will make the most geological sense

at this first pass stage.

I hope you’ll agree that we’re seeing something

that’s very unrealistic,

especially in regards to these higher grade sort of blow-outs

in my Northwest corner.

It’s fairly common for our projects

to have some areas of data scarcity.

If I bring my drill holes in here

and just turn the filter off for a second,

then I’m sure you’ll agree that sometimes at depth,

or indeed at our lateral extents,

in that area.

And in this case,

it’s just three drill holes here.

If I put my filter on,

you can see that it’s just these three drill holes

that are simply causing the interpolation here

to become really quite extrapolated.

And unfortunately there’s many an extreme example online

of such models,

these finding their way into company reporting.

So what I would encourage you all to do,

is to of course start refining the internal structure

of the model, and the majority of the functionality

to do so actually sits under the interpolant tab.

So if I go to the model

and I’ll go to the one that ran initially,

so just double click into it

and I’m going to come to the interpolant tab.

For now, I’m simply going to change my interpolant type

to spheroidal and change my drift function to None.

I will come back to this in more detail, but for now,

let me just let that run and we’ll have a look

at what that produces instead.

And again, great.

Here’s one I prepared earlier,

which we can now see,

and hopefully you can already see

this big notable difference already,

especially where that original high grade

was blown out.

This time, if I bring my drill holes back in again,

we can see that high grade interpolation

around those three drill holes still exists,

but just with a much smaller range of influence,

But why is that the case? To help answer that question,

I’m going to try and replicate these RBF parameters

on a simple 2D grid of points.

And this should quite literally help connect the dots

in our 3D picture.

So let me move away from this for a second

and just bring in my grid of points

and a couple of arbitrary samples.

Hopefully you can see those values starting to come through.

Here we go.

So what I’ve done so far for this is,

I have created three new RBF models

in order to estimate these six points shown on screen here.

I’ve then come back into each of the interpolant tabs

in order to adjust the interpolant settings,

and that’s just what the naming refers to here.

Now, Leapfrog Geo uses two main interpolant functions,

which in very simple terms will produce different estimates

depending on whether the distance

from our known sample points,

the range is taken into consideration.

A linear interpolant will simply assume

that any known values closer to the points

you wish to estimate

will have a proportionally greater influence

than any of those further away.

A spheroidal interpolant on the other hand

assumes that there is a finite range or limit

to the influence of our known data

beyond which this should fall to zero.

You may recognize the resemblance here

to a spherical variogram,

and for the vast majority of metallic ore deposits,

this interpolation type is more applicable.

Exceptions to this may be any laterally extensive deposit

like coal or banded iron formations.
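
To make the linear versus spheroidal distinction concrete: Leapfrog’s spheroidal function approximates a spherical variogram, so here is a sketch using the classic spherical covariance as a stand-in (an assumption, not Leapfrog’s exact function) to show the key behaviour, namely that influence falls to zero beyond the base range, whereas a linear interpolant has no such cutoff.

```python
import numpy as np

# Stand-in for the spheroidal interpolant: the spherical covariance model.
# Influence is full at the sample, decays with distance, and is exactly
# zero at and beyond the range (700 here, matching the talk's base range).
def spherical_cov(h, rng=700.0, sill=1.0):
    h = np.asarray(h, dtype=float)
    c = sill * (1.0 - 1.5 * h / rng + 0.5 * (h / rng) ** 3)
    return np.where(h < rng, c, 0.0)

print(spherical_cov(0.0))      # full influence at the sample
print(spherical_cov(350.0))    # partial influence inside the range
print(spherical_cov(1000.0))   # no influence beyond the range
```

A linear interpolant, by contrast, just keeps weighting by distance with no range at all, which is what allows the blow-outs seen in the first-pass model.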

So in addition to considering the interpolation method,

we must always also decide how best to control

our estimation in the absence of any data.

In other words,

how should our estimation behave

when we’re past the range of our samples,

say in this scenario we saw just a minute ago

with these three drill holes,

For this, we need to start defining an appropriate drift

from the options available.

And I think that was the point that I’d got to

in looking at my grid.

So at the moment I have a linear interpolant type

with a linear drift shown

and much like the continuous color legend

that I have up here,

we’re seeing a linear,

a steady linear trend in our estimation.

The issue is, while the estimation around

our known data points is as expected,

the linear drift will enable values

to both increase past the highest grade.

So in this case,

we’re sort of upwards to about a grade of 13 here,

as well as go into the negatives past the lowest grade.

So for great data of this nature,

we’re going to want to rein that estimation in

and start to factor in a range of influence.

So looking now at our spheroidal interpolant

with a drift of None,

we can see how the range starts to have an influence

on our estimation when we move away

from our known values.

So for example, if I start to come out

onto the extents here, then the estimation is falling

or decaying back to zero.

And that will be the same

if I sort of go around any of these,

we would expect to be getting down to a value of zero

away from any data points.

And of course, if you are close to the data point,

we would expect the estimation to be similar.

So where you’re modeling an unconstrained area,

or perhaps don’t have many low grade holes

to constrain your deposit,

using a drift of None will ensure

that you have some control over how far your samples

will have influence.

That said, if you are trying to model something constrained,

for example, the early diorite domain that we saw earlier,

then using a drift function of constant

could be more applicable.

In this case, our values are reverting

to the approximate mean of the data.

So if I just bring up the statistics our mean here is 4.167,

which means that as I’m getting to the outskirts,

I would expect it to be coming back towards that mean.

So a few different options, but of course,

different scenarios in how we want to apply these.
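
The drift behaviour just described can be sketched in a few lines; this is a toy, range-limited weighting scheme, not Leapfrog’s actual RBF solver, and the sample values are hypothetical (chosen so the mean is 4.167, as in the talk). Beyond the range the data term vanishes, leaving only the drift: zero for None, the data mean for constant.

```python
import numpy as np

# Toy estimator: a compactly supported weight function plus a drift term.
def estimate(x, xs, vals, rng=50.0, drift="none"):
    base = 0.0 if drift == "none" else vals.mean()   # constant drift = data mean
    h = np.abs(x - xs)
    w = np.where(h < rng, 1.0 - h / rng, 0.0)        # weights vanish past the range
    if w.sum() == 0:
        return base                                  # beyond the range: pure drift
    return base + (w * (vals - base)).sum() / w.sum()

xs = np.array([0.0, 10.0, 20.0])
vals = np.array([3.0, 5.0, 4.5])                     # mean = 4.167, as in the talk

print(estimate(500.0, xs, vals, drift="none"))       # far from data: decays to zero
print(estimate(500.0, xs, vals, drift="constant"))   # far from data: reverts to mean
```

Close to the data both drifts give similar estimates; the choice only matters once you are past the range of your samples.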

Now, if I jump back now to my gold model,

then let’s start first of all,

just with a quick reminder of where we started,

which was with this model,

and this is applying that default linear interpolant.

And simply by changing this

to the spheroidal interpolant type

along with a drift of None,

then we’re starting to see something

that makes much more sense.

So let me go back now to my interpolant tab

to look at some of these other settings.

So far we know we want to limit the influence

of our known data to a certain distance,

and that it’s the distance, the range,

essentially controlling that correlation.

It’s reasonable, therefore,

that you’re going to want to change the base range

to something more appropriate.

And if you don’t know where to start,

then a rule of thumb could be around twice

the drill hole spacing.

So in this case, that’s around 700,

which is appropriate for this project.

We also want to consider our nugget, and in Leapfrog,

this is expressed as a percentage of our sill.

Increasing the value of the nugget will create smoother

results by limiting the effects of extreme outliers.

In other words, we would give more emphasis

to the average grades of our surrounding values

and less on the actual data point.

It can basically help to reduce noise

caused by these outliers

or with inaccurately measured samples.

What value to use here is very much decided on a deposit

by deposit basis.

And if you or someone else

in your organization has already figured this out

for your deposit, then by all means apply it here,

Perhaps for a gold deposit like this,

then a rule of thumb would be,

let’s say, 20 to 30% of the sill.

So let’s just take this down to,

for example, 0.2, which is going to be 20% of that.

Something like 10% might be more appropriate

for other metallic deposits, or indeed none

if you have a very consistent data set.
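
Those rules of thumb can be written down directly; the fractions below are the illustrative ones from the talk, and the smoothing function is a crude stand-in (not Leapfrog’s kriging-style weighting) just to show the direction of the nugget’s effect.

```python
# Nugget entered as a proportion of the total sill, per the rule of thumb.
sill = 1.0
nugget_fraction = {"gold": 0.2, "other_metallic": 0.1, "very_consistent": 0.0}

for deposit, frac in nugget_fraction.items():
    print(deposit, "nugget =", frac * sill)

# Crude illustration of the smoothing effect: a higher nugget shifts weight
# from the sample itself onto the surrounding average (hypothetical values).
def smoothed(sample, local_mean, nugget):
    return (1.0 - nugget) * sample + nugget * local_mean

print(smoothed(13.0, 4.0, 0.2))   # a grade-13 outlier pulled toward its neighbours
```

The bigger the nugget, the more an extreme sample is pulled toward the local average, which is exactly the noise reduction being described.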

We’ve spoken about drift already,

but once we are past the range of influence,

then it’s really our drift that comes into play.

Using None in this case is going to help

control how far my samples will have an influence

before the estimation decays back to zero.

I’m not going to delve too much

into the other settings here.

You can always read about these

to your heart’s content online,

but the take home point really is that,

it’s the interpolant type, the base range,

that percentage nugget to the sill, and the drift

that will have the most material effects

on your numeric model.

Get these right and you should be well on your way

to producing a far better model

that makes the best of your geological knowledge.

All right, now we’ve talked a little bit about

these interpolant settings.

I thought it would be worth a quick run over

on the isosurfaces.

So we can see all these,

as we expand out our new model objects,

we’ve got isosurfaces

and then of course the resulting output volumes.

When we pick cutoffs in our outputs tab,

we are simply asking Leapfrog to define some boundaries

in space, your contours,

where there is the same consistent grade.

Essentially the isosurfaces perform the exact same task

as our geological surfaces do

when we’re geologically modeling.

But instead of building these surfaces ourselves,

we’re simply picking grade boundaries

that we want to see and Leapfrog will go

and build them for us.

It is this contour surface

that we’re seeing here in 3D.

And if we want to visualize a specific value,

then we can do so by updating the iso values here.

Before I do that, let me just jump back

into the grid of points,

just to highlight this on a sort of simpler dataset.

So back in with my grid of points, let’s say for instance,

I now want to visualize a contour surface

with a grade of two, then I can replicate that here

simply by setting some appropriate value filters.

And we can see that it’s now

finding all of these values of two in space,

and it’s these points that of course

are going to become our contour.

For the hikers amongst us

who are maybe used to seeing elevation contour lines

on a 2D map.

This is the same principle,

except in the case of our gold model.

We’re simply looking at a value representation in 3D.
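
That contour principle is easy to demonstrate; here is a sketch on a hypothetical 1D grade profile (a made-up Gaussian bump, not the project’s data), flagging grid points near the iso value of two with a simple value filter, exactly the grid-of-points exercise just shown.

```python
import numpy as np

# Toy 1D grade profile: an isosurface is just the locus where the field
# equals a chosen value, here a grade of 2.
x = np.linspace(0.0, 10.0, 101)
grade = 4.0 * np.exp(-((x - 5.0) ** 2) / 4.0)   # hypothetical smooth grade field

iso = 2.0
near_contour = x[np.abs(grade - iso) < 0.1]      # value filter around the iso value

print(near_contour)   # one crossing on each side of the peak
```

In 3D the same locus forms a surface rather than a pair of points, and Leapfrog builds that surface for you at each cutoff you pick.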

So let me go back to my gold model once again

and actually set some better iso values for this deposit.

So I’m just going to double click back in

and head over to the output tab.

So let’s just set a few that make a little bit more sense

than the ones that are put in as default.

So let’s say 0.5, 0.75.

Let’s do 1.25, and 1.5 I think will suffice.

The other thing that I’m going to do

is take down the resolution on my higher grades

to try and be as accurate as possible.

And remember, you might not have noted it,

but remember my much earlier point

about matching this where possible

to the drill hole composite length.

So that’s why I’m picking the six here.

There’s also the ability in this tab to clamp output values.

The default assumption is that

nothing should fall below zero,

which is why we’re seeing that clamp,

but you can always change that if need be. Likewise,

you can set different lower,

and indeed upper bounds in the value transform tab

to cap any values you may deem too low or too high,

but I’m not going to focus on that for now.

So let me just let that run

and again, as if by magic,

let’s have a look at what that has produced.

And we’re starting to see something visually

that perhaps makes more sense to us

and to our gold boundaries.

Now, if I want to see more connectivity

between my drill holes, then aside from the base range,

I could use a trend for this also.

The central shear zone, for example,

is no doubt playing a dominant structural control

on my mineralization.

We’re starting to see a lot of our high grades

following some sort of trend here.

So applying a structural trend should help

to account for any changes in the strength

and direction of continuity along this ridge.

I’ve already defined a structural trend for the shear zone.

I’ve put it as something like this for now,

however we could of course always update this as need be.

Either way, this is a typical step to take.

Given that nature is very rarely isotropic.
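
The idea behind a trend can be sketched as a stretched distance metric: samples along the dominant direction look “closer” than samples across it, so continuity carries further along strike. The 2:1 ratio below is purely illustrative, not Leapfrog’s actual structural trend object.

```python
import numpy as np

# Anisotropic distance: divide each component by its ellipsoid ratio before
# measuring distance, so the along-trend direction correlates further.
def aniso_distance(dx, dy, along=2.0, across=1.0):
    # dividing by the larger ratio shrinks the along-strike distance
    return np.hypot(dx / along, dy / across)

print(aniso_distance(100.0, 0.0))   # along trend: effective distance 50
print(aniso_distance(0.0, 100.0))   # across trend: effective distance 100
```

Feeding this stretched distance into a range-limited interpolant is what lets high grades connect along the shear zone without smearing across it.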

So let me go and apply this structural trend to my model,

which I can do from my trend tab.

And I’ve only got the one in my project at the moment.

So it’s just that structural trend,

and let’s just let that run.

And again, with a fantastic ‘here’s one I prepared earlier’,

we can see what that has done.

You can definitely see how that trend has changed

the weighting of points in space

to define that continuity along that ridge.

So a powerful tool, and usually applicable,

certainly in our sort of metallic gold deposit,

or indeed any deposit.

So now that we’ve sort of run to this point,

it’s likely at this stage

that we’re going to want to home in on our mineralized domains.

I spoke earlier about the early diorite unit

containing our highest mean gold grade.

So let’s come full circle and create a numeric model

for just this unit also.

So let me clear the scene and let me cheat

by copying my last model

with all of its parameters into the new one.

And now we can go into that copy of our model

and start to apply reasonable parameters here as well.

Now, the first thing we’re going to want to do is of course,

set some boundaries. At the moment

it’s just the exact same as the last one,

which is the extent of my geological model.

So, let’s go in and set a new lateral extent.

And what I’m going to do is use the volume

from my geological model as that extent.

So from surface, and then under my geological model,

under the output volumes,

I’m going to select that early diorite.

Now, whilst that’s running,

remember that what that is going to do

is first of all, constrain your model

to that unit of interest.

So this is my early diorite.

And it’s also only going to use the values

that refer to this unit,

with that initial surface filter set up.

We then of course,

want to go and review our interpolant settings here

to check that they’re still applicable.

Now that we’ve constrained our data to one domain.

So let me go back and double click into this model.

And let’s just double check once again,

that our interpolant settings make sense.

It’s probably going to be more applicable in this case,

where we have an absence of data,

So again, on the outskirts, on the extents of our model,

that perhaps we’re going to want to revert

to the mean of the grade in the absence

of any other information.

In which case, the best thing that we can do

is simply to update the drift here to constant.

Hopefully you remember from that grid of points,

how that is always going to revert to the mean of the data.

I think for now,

I’m just going to leave every other setting the same,

but of course we could come in

and start to change any of these,

if we wish to see things a little bit differently,

but for now, it’s that drift that for me

is the most important.

So again, let me rerun and bring in a final output.

And if I just make these a little less transparent,

you can hopefully see now

that whereas before we had those waste grades

coming in on this bottom corner,

we’re now reverting to something similar to the mean.

And I think in this case,

if I remember rightly, the mean is 1.151.

So we would expect to be kind of up in that sort of darker,

darker yellow colors.

At this point,

I’m going to say that I’m reasonably happy

with the models I have.

I would, of course, want to interrogate these a lot further.

Maybe make some manual edits

or double check some of the input data,

but for the purpose of this talk,

I hope that this has gone some way

in highlighting which interpolant settings

will most effectively improve your numeric models.

I appreciate that this is a very extensive topic

to try and fit into a shorter amount of time.

Please do keep your questions coming in.

In the meantime,

I’m going to hand over to James now,

to run us through the indicator RBF interpolant tool.

<v James>Thanks, Suzanna,</v>

bear with me two seconds and I’ll just share my screen.

So a lot of the stuff that Suzanna has run through

in those settings, we’re going to apply now

to the indicator numeric models.

Indicator numeric models are a tool

that is often underused or overlooked in preference

to the RBF models,

but indicator models can have a really valuable place

in anyone’s workflow.

So I’ve got the same project here,

but this time I’m going to be looking at

some of the copper grades that we have in the project.

If I do a bit of analysis initially on my data,

I can see that when I look at the statistics for my copper,

there isn’t really a dominant trend to the geology

where my copper is hosted; it’s more of a disseminated

mineralization that is spread across multiple domains.

If I come in and have a look at the copper itself,

what I want to try and understand

is what would be a good cutoff to apply

when I’m trying to use my indicator models.

And in this case here,

I’ve had a look at the histogram of the log,

and there’s many different ways

that you can approach cutoffs.

But in this case, I’m looking for breaks

in the natural distribution of the data.

And for today, I’m going to pick one around this area

where I see this kind of step change

in my grade distribution.

So at this point here,

I can see that I’m somewhere between 0.28 and 0.31% copper.

So for the purpose of the exercise

we’ll walk through today, we’ll use 0.3 as my cutoff.

So once I’ve had a look at my data and I have a better idea

of the kind of cutoffs

I want to use to identify mineralization,

I can come down to my numeric models folder,

right click to create a new indicator

RBF interpolant.

The layout to this is very similar to the numeric modeling.

Again, you can see,

that I can specify the values I want to use.

I can pick the boundaries I want to apply.

So for now, I’m just going to use the entire project

and all my data.

And if I want to apply my query filters,

I can do that here as well.

A couple of other steps I need to do,

because this is an indicator interpolant.

I need to apply a cutoff.

So what I’m doing here is

I’m trying to create a single surface

enclosing grades above my cutoff of 0.3.

And for the purpose of time,

I’m also going to just composite my data here.

So that it helps to run,

but it also helps to standardize my data.

So I’m going to set my composite length to four

and anywhere where I have residual lengths less than one,

I’m going to distribute those equally back through my data.

I can give it a name to help me identify it.

And we’re going to come back

and talk about the ISO values in a minute.

So for now, I’m going to leave my iso value

as the default of 0.5, and then I can let that run.

So what leapfrog does with the indicators

is it will create two volumes.

It creates a volume that is considered inside my cutoff.

So if I expand my model down here,

we’ll see there’s two volumes as the output.

So there’s one that is above my cutoff,

which is this volume.

And one that is below my cutoff,

which is the outside volume.

Now, at the moment,

those shapes are not particularly realistic.

Again, very similar to what we saw

with Suzanna’s explanation initially

because of the settings I’m using,

I’m getting these blow outs to the extent of my models.

The other thing we can have a quick look at

before we go and change any of the settings

is how leapfrog manages the data

that we’ve used in this indicator.

So it’s taken all of my copper values

and I’ve got my data here in my models.

So let me drag that on

and just set it up so you can see it.

So initially from my copper values,

leapfrog will go and flag all of the data

as either being above or below my cutoff.

So here you can see my cutoff.

So everything is flagged above or below.

It’s then going to give me a bit of an analysis

around the grouping of my data.

So here you can see that it looks at the grades

and it looks at the volumes it’s created,

and it will give me a summary of samples

that are above my cutoff and fall inside my volume,

but also samples that are below my cutoff

that are still included inside.

So we can see over here,

these green samples would be an example

where a sample is below my cutoff,

but has been included within that shell.

And essentially this is the equivalent

of things like dilution.

So we’ve got some internal dilution of these waste grades

and equally outside of my indicator volume,

I have some samples here that fall above my cutoff grade,

but because they’re just isolated samples,

they’ve been excluded from my volume.

So with that data and with that known information,

I then need to go back into my indicator

and have a look at the parameters and the settings

that I’ve used to see if they’ve been optimized

for the model I’m trying to build.

What we know with our settings,

and again, when we have a look at the inside volume,

if I come into my indicator here,

and go and have a look at the settings I’m using,

then I come to the interpolant tab.

And for the reasons that Suzanna has already run through,

I know when modeling numeric data

and particularly any type of grade data,

I don’t want to be using a linear interpolant.

I should be using my spheroidal interpolant.

The other thing I want to go and look at changing is that

I don’t have any constraints on my data at the moment.

So I’ve just used the model boundaries.

So I want to set my drift to None.

So again, that’s more of a conservative approach

as I’m moving away from data

and my assumption is my grade is reverting to zero.

We can also come and have a look at the volumes.

So currently our iso value is 0.5.

So we’ll come and talk about that in a second.

And the other good thing we can do here is

we can discard any small volumes.

So as an example of what that means,

if I take a section through my projects,

we can see that as we move through,

we get quite a few internal volumes

that aren’t really going to be of much use to us.

So I can filter these out based on a set cutoff.

So in this case,

I’m excluding everything less than 100,000 units,

and I can check my other settings to make sure

that everything else I’m doing,

maybe where I use the topography, is all set up to run how I want.

Again, as Suzanna talked about,

any time that you’re modeling numeric data,

best practice would typically be to use

some form of trend on your data.

So I can apply the structural trend here

that Suzanna had in hers, and I can let that one run.

Now, when that’s finished running,

I’ve got my examples here.

So if I load this one on,

I can see the outline here of my updated model

and how that’s changed the shape of my indicator volume.

So if we come back out of my section, put these back on,

I can see by applying those,

simply by applying those additional parameters.

So changing my interpolant type from linear

to spheroidal, adding a drift of None,

and my structural trend,

I’ve got a much more realistic shape now

of what is the potential volume of copper

greater than 0.3%.

Now, the next step into this

is to look at those ISO values

and what they actually do to my models.

The iso value: if we come into the settings here,

and I’m just going to open up the settings again,

you can see at the end, I can set an iso value.

This is a probability of how many samples within my volume

are going to be above my cutoff.

So essentially if I took a sample

anywhere within this volume,

currently there is a 50% chance

that that sample would be above my 0.3% copper cutoff.

So by tweaking those indicator ISO values,

you actually can change the way the model is being built

based on a probability factor:

what’s the chance of those samples

inside being above your cutoff?
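
That probability interpretation can be sketched with a toy example; the cell values below are made up, standing in for the indicator interpolant’s output, where each value is the modelled chance that a sample in that cell exceeds the 0.3% Cu cutoff.

```python
import numpy as np

# Hypothetical per-cell probabilities of exceeding the grade cutoff.
prob = np.array([0.1, 0.25, 0.35, 0.5, 0.65, 0.8, 0.9])

# The iso value is the probability threshold that defines the inside volume.
def inside_cells(p, iso):
    return int((p >= iso).sum())

for iso in (0.3, 0.5, 0.7):
    print(iso, "->", inside_cells(prob, iso), "cells inside")
```

Lowering the iso value grows the inside volume (the optimistic case), while raising it shrinks it (the conservative case), which is exactly the effect shown next on the section.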

So I’ve created two more with identical settings,

but just changed the iso value on each one.

If we step back into the model on a section,

here we can see our model;

I’m going to take the triangulations off,

so we’ve got the outline.

So currently with this volume,

what I’m saying is that I have a 50% chance,

if I take a sample anywhere in here,

that it is going to be above my cutoff.

I can also change my drill holes here to reflect that cutoff.

So you can see here, the 0.3% cutoff.

So you can see on my drilling the samples

that are above and below.

If I go into my parameters

and I change my iso value down to 0.3,

so we can have a look at the inside volume on this one,

and what this is saying, I’ve just made it solid

so you can see it as well,

is that this is basically a volume

that is more of a prioritization of volume.

So this could be an example.

If you needed to produce a min case and a max case

and a mid case,

then you could do this pretty quickly by using your cutoffs

and using your ISO values.

So dropping the iso value down to 0.3

is essentially saying that there’s a 30% confidence

that if I take a sample inside this volume,

it will be above my cutoff.

So you can see that changing the ISO value

and keeping everything else the same

has given me a more optimistic volume

for my indicator shell.

Conversely, if I change my indicator ISO value to 0.7,

it’s going to be a more conservative shell.

So here, if I have a look at the inside volume in green,

and again, maybe just to help highlight

the differences with these.

So now this is exactly the same settings,

but applying an iso value,

so a probability or a confidence, of 0.7.

And again, just for you to review and note:

so increasing my iso value

will give me a more conservative case.

This will be prioritizing the metal content,

so less dilution.

And essentially I can look at those numbers and say,

I have a 70% confidence that any sample inside that volume

will be above my cutoff.

So, as I said, very quickly,

and particularly in the exploration field,

you can have a look at a resource or a volume.

And if you want to have a look at a bit of a range analysis

of the potential of your mineralization,

you can generate your cutoff and then change your ISO values

to give you an idea of how that can work.

Once you’ve built these,

you can also then come in and have a look:

it gives you a summary of your statistics.

So if we have a look here at the indicator at 0.3,

and look at the statistics of the indicator at 0.7,

we can go down to the volume.

So you can see the volume here

for my conservative ISO value at 0.7

is 431 million cubic meters,

as opposed to my optimistic shell, 545.

You can also see from the number of parts.

So the number of different volumes

that make up those indicators shells in the optimistic one,

I only have one large volume,

whereas the 0.7 is obviously a bit more complex

and has seven parts to it.

The last thing you can do

is you can have a look at the statistics.

So for example, for the inside volume,

I can have a look at how many samples here

fall below my cutoff.

So out of, what’s that, 3,465 samples,

260 of those are below my cutoff.

So you can kind of work out, again,

a very rough dilution factor by dividing

the number of below-cutoff samples inside your volume

by the total number of samples inside,

which would bring your answer,

in this case for the 0.3, to around seven and a half percent.
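
That back-of-envelope dilution figure checks out with the numbers quoted in the talk:

```python
# Rough dilution check: of the 3,465 samples inside the 0.3 indicator shell,
# 260 fall below the cutoff.
inside_total = 3465
inside_below_cutoff = 260

dilution = inside_below_cutoff / inside_total
print(round(100 * dilution, 1), "%")   # about 7.5 %
```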

So this exercise is really just to highlight

some of the tools that aren’t necessarily

used very frequently,

but can give you a really good understanding of your data

and help you to investigate the potential of your deposits

by using some of the settings in the indicator interpolants,

but ultimately coming back to the fundamentals

of building numeric models.

And that is to understand the type of interpolant you use

and how that drift function can affect your models

as you move away from data.

So that’s probably a lot of content for you

to listen to in the space of an hour.

As always, we really appreciate your attendance and your time.

We’ve also just popped up on the screen,

a number of different sources of training and information.

So if you want to go

and have a look at some more detailed workflows,

then I strongly recommend you look at the Seequent websites

or our YouTube channels.

And also on the MySeequent page,

we now have all of the online content

and learning available there.

As always we’re available through the support requests.

So drop us an email

and if you want to get into some more detailed workflows

and how that can benefit your operations and your sites,

then let us know as well.

We’re always happy to support

via project assistance and training.

So that pretty much brings us to the end of the session

to the top of the hour as well.

So again, thanks to you for attending,

thanks to Suzanna and Andre for putting this together

and running the session.

And we’ll look to put another one

of these together again in the new year.

So hopefully we’ll see you all there.

Thanks everybody and have a good day.