Learn how you can visualise, model and understand your environmental projects by creating and defining contamination plumes using the robust geostatistical tools in the Contaminants Extension for Leapfrog Works, giving you transparent and defensible estimates of contaminant mass and location in your projects.
The webinar will cover:
- An introduction to the Contaminants Extension for Leapfrog Works.
- The Contaminants Extension Workflow – with a brief overview of:
- Data Preparation and Visualisation
- Domaining and Exploratory Data Analysis
- Model Validation
- Live questions and answer session.
Customer Solutions Specialist – Seequent
Senior Technical Lead – Seequent
<v Aaron>So hello everyone</v>
and welcome to today’s webinar
on the Contaminants Extension for Leapfrog Works.
Today we’ll be looking at the demonstration
of the Contaminants Extension
with time at the end of the webinar for your questions.
My name is Aaron and I’m a Customer Solutions Specialist
here at Seequent, based in our Brisbane office,
and I’m joined today by Steve Law
our Senior Technical Lead for Geology.
<v Steve>Hello, I’m Steve</v>
and I’m a Senior Technical Lead
and I’m based here in (mumbles).
<v Aaron>Today we’ll first go through</v>
a bit of background on Seequent
and the Contaminants Extension.
I’ll then hand over to Steve who will take us through
the Contaminants Extension in Leapfrog Works itself.
After that we’ll go through any questions that anyone has.
So please, if you do have any at any time,
put them in the question or chat window
in GoToWebinar and we’ll get to them.
Here’s some background on Seequent.
We are a global leader
in the development of visual data science software
and collaborative technologies.
With our software you can turn complex data
into geological understanding, providing timely insights
and giving decision makers confidence.
At Seequent, our mission is to enable our customers
to make better decisions about their earth
and environmental challenges.
The origins of Seequent go back to the 1990s
when some smart people
at the University of Canterbury in New Zealand,
created the fast radial basis functions
for efficiently generating 3D surfaces from point data.
One of the applications that was seen for this
was a better way to build geological models.
And so in 2004, the company ARANZ Geo was founded
and the Leapfrog software was first released
to the mining industry.
Since then the company has grown and evolved
entering different industries
such as geothermal energy, environmental and civil.
And in 2017, ARANZ Geo rebranded to Seequent.
In 2018, Seequent acquired Geosoft
the developers of Geophysical Software
such as Oasis montaj.
And then in 2019, acquired GeoStudio,
the developers of Geotechnical Analysis Software.
During this time Seequent has continued developing
our 3D geological modeling in Leapfrog
and collaboration tools such as Central.
Earlier this year,
Seequent was acquired by Bentley Systems
and has become a Bentley company
which opens us up to even more possibilities
for future development for geoscience solutions.
Seequent has offices globally
with our head office in Christchurch, New Zealand.
And more locally we have offices in Australia,
in Perth and in Brisbane.
Like I mentioned, Seequent’s involved
in a number of different industries,
covering everything from contaminant modeling,
road and tunnel construction,
groundwater detection and management,
geothermal exploration, resource evaluation
and so much more.
Today we’re looking at contaminated land
and environmental projects.
Seequent provides solutions across the life
of contaminated site projects
from the initial desk study phase
to the site investigation
through to the remediation, design and execution.
These solutions include our geophysics software
such as Oasis montaj and VOXI.
Our 3D visualization and geological modeling
and contaminant modeling in Leapfrog Works
as well as flow and transport analysis in GeoStudio.
Across all of this we have Seequent Central
a cloud-based model management and collaboration tool
for the life of the projects.
Like I said, today we’re specifically looking
at the Contaminants Extension in Leapfrog Works.
So what is the Contaminants Extension?
The Contaminants Extension
is an optional module for Leapfrog works
to enable spatial modeling of numeric data
for defining the concentration, distribution
and mass of in-situ contamination at a site.
The Seequent Contaminants Extension provides
a transparent and auditable data-driven interpretation.
It provides interactive visual tools
within the extension that make geostatistics
accessible for all geoscientists,
ensuring that you and your teams
can work best with your data.
It has built-in report tables
and cross-section evaluations
that are dynamically linked
to your plume models and your data.
So they will update automatically
as the investigation evolves and new data
comes into your Leapfrog project.
It also means there’s an interrogatable estimation tool
that provides easy access to the details
for how you’ve reached your informed decisions.
In short, it allows you to take the known data for your project,
analyze the spatial variability
of the concentrations in your data,
and it helps you define your plumes.
So that’s what it is.
Why did we develop it? And why is it useful?
Well, we did it because we wanted to provide
these accessible best practice solution tools
for you to model your contamination plumes.
We wanted to make characterizing contaminated land
and groundwater rigorous and auditable,
combining 3D dynamic geological models
with these best practice geostatistical methods.
The ultimate goal of estimation
is to combine the qualitative geological interpretation
of your project.
So your geological models
with the sparsely sampled quantitative data
to create spatial predictions
for the distribution of that contaminant plume.
The Contaminants Extension
coupled with Seequent Central provides a holistic way
for the engineer and modeler for a project
to effectively understand
and importantly communicate the site conditions
with both internal and external stakeholders
in a collaborative environment.
The graphic here shows an overview of the flow
from Leapfrog to Central as part of the solution.
Starting at the base here and moving up
we can see that with your investigation data,
models and estimations can be created by the modeler
and the environmental engineer, for instance.
And using Central,
they can share this and collaborate on this
with internal team members such as managers,
reviewers or other modelers
to build up the understanding of the site
or as part of the project review process.
At the very top of this infographic here
we can then see how we can use Central
to engage with external stakeholders
such as the clients or other contractors
and you can bring them into the Central project again
to communicate in an interactive and visual way.
The Seequent solution for the contaminant projects then
is all about knowing the impact
of environmental contamination for your projects.
Through first seeing the problem
in a clear, intuitive and visual way.
If a picture is worth a thousand words,
a 3D visualization of the site has to be worth a million.
With the Contaminants Extension,
it’s an interactive, dynamic site model
you can create and understand,
to aid in your analysis, your assumptions
and your recommendations for your reports.
I’ll now hand over to Steve
who will take us through Leapfrog
and the Contaminants Extension.
So we’ll just switch over from the slides here. (mumbles).
<v Steve>Thanks very much Aaron</v>
for that great introduction.
So I’m going to go through the
Contaminants Extension Workflow.
The end result is a resource model,
and through the whole process we have a look at our data,
go through the domaining,
run through estimations, and validate and report at the end.
The whole process isn’t linear;
it can be cyclical,
so that whilst we’re at any stage
we can always go back to our geology
and look at how things relate to each other.
We’re going to go through a couple of key steps
that we need to consider whilst building up a plume model.
So the first part is being well aware of our data
and how we can visualize it very easily
within Leapfrog Works.
So we want to consider things like:
where is our data and how is it located spatially?
We have in-built tools we can use to check
and make sure our data is validated,
and we have really good visualization tools.
So within this project we have three models,
the first one is a geology model
and you can see here that it’s got outwash
and ice-contact deposits and till above a bedrock sequence.
And within that we have a range of vertical boreholes,
and these have been sampled, generally in isolated locations,
for a chloride contaminant,
and that’s what we’re going to be evaluating.
As well as the geology model,
we have a water table model.
And this has been divided into a saturated zone
and a vadose zone, again above bedrock.
So one of the first things we want to consider is,
is there any specific way that we should domain our data?
To answer that: is chloride associated with lithology?
Is it associated with the water table, or a combination of both?
And we have some tools within Leapfrog Works
that can show this.
So up in the boreholes’ database here,
I have a table with the contaminants
and then I’ve got a geology table.
We have a merge table function
so I’ve been able to merge these together.
And this enables me then to evaluate them
using some statistical tools
in this case I’m using a box plot.
And so this here is a box plot of the geology
against the contaminant.
And we can see here that that is telling me
that the ice-contact deposits and the till
have the highest concentrations of chloride,
and then there’s a lot less
in the outwash deposits and bedrock.
We can also have a look at this as a table of statistics.
This gives an idea of the total concentration,
and I can see the actual values within each unit.
One of the key things here is
looking at the number of samples
so we have only two samples within the bedrock.
We’d identified that the ice-contact deposits and the till
had the greatest concentrations,
and we’ve got the most samples in those as well;
we’ve got approximately 70 samples within the ice-contact deposits.
And we’ve got the means,
so we’ve got much higher means
within the ice-contact deposits and till,
very low within the outwash deposits,
and again quite low within bedrock,
but there’s only two samples there.
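As an aside for readers following along outside Leapfrog: the per-domain comparison Steve is describing, sample counts and means by lithology, can be sketched with pandas. The column names and values below are illustrative, not the webinar data set.

```python
import pandas as pd

# Hypothetical merged table: one row per sampled interval, carrying the
# lithology from the geology table and the chloride assay value.
samples = pd.DataFrame({
    "lithology": ["ice_contact", "ice_contact", "till", "till",
                  "outwash", "outwash", "bedrock", "bedrock"],
    "chloride_mg_l": [180.0, 250.0, 140.0, 210.0, 12.0, 30.0, 5.0, 8.0],
})

# Summary statistics per domain: sample count and mean, as in the
# statistics table shown in the webinar.
stats = samples.groupby("lithology")["chloride_mg_l"].agg(["count", "mean"])
print(stats)
```

With real data you would watch the count column closely, as Steve does here, before trusting a mean built on only two samples.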
So in this instance,
if we have a look at the water table,
there’s no clear separation:
both the saturated and the vadose zones
have significant concentrations,
and there’s not one particularly higher than the other.
Saturated is a little bit higher,
but it’s not obviously different.
So the way we’ll consider this then
is I’m just going to use a single domain
that’s above bedrock, not taking into account
whether it’s saturated or vadose.
In this way, I have a geological model
that is domained accordingly.
So, once we’ve decided on
how we’re going to domain our data,
then we start using the Contaminants Extension,
and we can link it to our geology model
right through to the resource estimate.
And we can do exploratory data analysis
specifically on the domain itself.
So we have a look at the domain model.
And in this case,
all of the lithologies above bedrock
have been combined into a single domain,
which is the yellow one in the scene.
And we can see that the majority of the samples
are within that, and there’s just one isolated sample
below in the bedrock, which we won’t be considering
in this particular estimate.
When we build up an estimation,
we have what’s termed the Contaminants Model folder.
So when we activate the Contaminants Extension license,
it displays as a folder in your project tree,
with an Estimation subfolder
and a Block Models folder.
The estimation folder can be considered to be
where you set up all your parameters.
And then the block model is where we display
all of our results
and we also do all our reporting and validation
from that point as well.
So I’ve decided to do a total concentration estimation
of this chloride within the unconsolidated domain,
which is this yellow block here.
So when we create a domain estimator,
we just select any closed solid within our project
and you can see here that
this is the unconsolidated sediments
within our domain model.
And then the numeric values are coming directly
from our assay table, contaminant intervals.
So we can choose any numeric column
within any part of our database.
And also if we were storing point data
within the points folder over here,
we would be able to access that as well.
So it doesn’t have to be within drillholes.
I’m not applying any transform at this stage
and we also have the capability of doing compositing.
This data set doesn’t lend itself to compositing
because it’s got isolated values within each hole,
really only one or two samples.
Compositing is more for when we’ve got
a continuous range of samples down the borehole,
and we’d want to composite that into larger intervals
to reduce their variance.
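Although Steve skips compositing for this data set, the idea, length-weighted averaging of contiguous downhole samples into fixed-length intervals, can be sketched as follows; the interval bounds and values are illustrative.

```python
# Minimal sketch of downhole compositing: length-weighted averaging of
# contiguous (from, to, value) samples into fixed 2 m composites.
def composite(intervals, length=2.0):
    """intervals: list of (from_m, to_m, value); returns composited list."""
    if not intervals:
        return []
    start, end = intervals[0][0], intervals[-1][1]
    out = []
    top = start
    while top < end:
        bottom = min(top + length, end)
        total_len = weighted = 0.0
        for f, t, v in intervals:
            # Length of this sample falling inside the composite window.
            overlap = max(0.0, min(t, bottom) - max(f, top))
            if overlap > 0:
                total_len += overlap
                weighted += overlap * v
        if total_len > 0:
            out.append((top, bottom, weighted / total_len))
        top = bottom
    return out

comps = composite([(0.0, 1.0, 10.0), (1.0, 3.0, 20.0), (3.0, 4.0, 40.0)])
print(comps)  # three raw samples become two 2 m composites
```

The length weighting is what reduces the variance: long low-grade runs are not swamped by a single short high-grade sample.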
So within each estimation for each domain,
we have this sub folder.
And from there we can see the domain itself.
So we can double check that that is the volume of interest
and we can display the values within that domain.
So they have become point values at this stage
and you can label those
and you can also do statistics on these.
So these now become domain statistics.
So by right clicking on the values,
I get a histogram of those values there.
As with all Leapfrog Works graphs,
they are interacting with the 3D scene.
So let’s just move this over here.
If I wanted to find out where these high values were,
I can just highlight them on the graph
and they will show up in the scene.
Let’s select these ones above 100; there they are.
So that gives us an idea of whether or not
we need to create sub-domains.
If all the high grades were focused a little bit more
we might be able to subdivide them out.
But in this case,
we’re just going to treat everything in the one domain.
The Spatial Models folder is where we can do variography,
which is a spatial analysis of the data
to see whether there are any specific directions
where the concentration grades might be aligned.
I’ll open this one up.
So the way the variography works
is that we select the direction
and then we model it in three directions.
One with the greatest degree of continuity et cetera.
This radial plot is a plane within the ellipsoid.
We’re looking in a fairly horizontal plane here,
and we can always see the ellipsoid that we’re working with
in the scene.
If we wanted to make it a little bit flatter,
we can edit it in the 3D scene
and it will update on the form here,
and the graphs will update.
We can use this radial plot;
it may suggest that this orientation
has a little bit more continuity, and we’ll re-adjust.
This particular data set doesn’t create
a very good experimental variogram
so we do have some other options
where we could use a log transform prior to that.
So here I’ve done a log transform on the values,
and that was simply done at the very beginning:
when I selected my samples,
I could apply a log transform in through here.
This gets rid of some of the variability
especially the range and extreme differences
between very high values and very low values
and this can help develop a better semivariogram model.
If we have a look at this one,
you can see that the semivariogram
is a little bit better structured than it was before.
Again this data is quite sparse.
So this is about the best we could do
with this particular data set.
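The effect of the log transform Steve describes, taming the spread between extreme high and low values before variography, is easy to see numerically; the values below are illustrative.

```python
import math

# A log transform compresses the gap between extreme highs and lows,
# which can make experimental variograms better structured.
values = [2.0, 5.0, 8.0, 400.0, 1500.0]
logged = [math.log(v) for v in values]

spread_raw = max(values) - min(values)
spread_log = max(logged) - min(logged)
print(spread_raw, spread_log)  # the log-space spread is far smaller
```

Note that values must be strictly positive before taking logs, which is one reason negative or zero assays need attention first.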
We have the ability to pick a couple of structures;
we have a nugget plus potentially two structures,
but in this case I’ll just pick a nugget,
a fairly large nugget, and a single structure,
which is sufficient to model this.
We can get an idea of that model.
The range is quite large, 1,200 to 1,400 meters.
And that’s because the data itself is quite widely spaced.
And we’ve only got a very short range
in the Z direction because we’re looking at a
fairly constrained tabular body of potential contamination.
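A nugget plus a single spherical structure, the model form being fitted here, can be written out directly. The nugget and sill values below are illustrative, and the 1,200 m range simply echoes the horizontal range quoted above.

```python
# Semivariogram model: nugget plus one spherical structure.
def spherical_variogram(h, nugget=0.3, sill=1.0, a=1200.0):
    """Semivariance at lag distance h (same units as the range a)."""
    if h <= 0.0:
        return 0.0                     # zero semivariance at zero lag
    if h >= a:
        return nugget + sill           # flat beyond the range
    r = h / a
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

print(spherical_variogram(600.0))      # partway up the structure
print(spherical_variogram(1200.0))     # nugget + sill at the range
```

Anisotropy, such as the short vertical range Steve mentions, is handled in practice by scaling the lag distance differently along each ellipsoid axis before applying this function.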
The advantage to building the variography
within the contaminants extension is that
everything is self-contained,
and once we hit the save button,
then that variogram is all ready to use
within the estimations themselves.
We also have a concept of variable orientation.
Sometimes the shape of the contaminant
isn’t purely tabular,
and there may be local irregularities
within the surfaces.
So we can use those surfaces
to help guide the local orientation of the variogram
when we’re doing sample selection.
So I’ve got one set up here already.
So the way we do that
is that we set up a reference surface,
and in this case it’s the actual base of the domain
over the top of the bedrock,
and we can see here that we’ve colored it by dip.
So we can see that we’ve got some varying dips:
it’s flatter up in the middle
and slightly steeper around the edges.
If we hop in and look at cross sections through there,
we can see what will happen
when we pick out samples:
the ellipsoid will be orientated locally.
So the overall ellipsoid is in this general direction,
but where the surface flattens out,
it will rotate into a flat orientation.
The actual ratios and the variogram parameters themselves
are still maintained.
It’s just the orientation
for when the samples are being selected.
When we’re doing an estimation,
we have to consider a few different things.
So we need to understand what stage our project is at,
and this might help us work out
which estimation method to use.
If we’ve got early stage,
we might not have very much data.
We might not have sufficient data to get a variogram.
So we will need to use either nearest neighbor
or inverse distance type methods.
There can be a lot of work to go into the search strategy.
And that is how many samples are we going to use
to define a local estimate.
And then once we run our estimate,
we need to validate it against the original data
to make sure that we’re maintaining the same kind of means
and then we want to report our results at the end.
To set up the estimators,
we work through this estimators sub folder here
and if you right click
you can choose inverse distance, nearest neighbor or kriging,
and we can do ordinary kriging, simple kriging,
or we can use the RBF estimator as well,
which can be quite good
for when we haven’t got sufficient data for a variogram.
It’s a global estimator
using the same underlying algorithm
that is used when we krig the surfaces
or the numeric models
that can be run within Leapfrog Works itself.
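For readers wanting the mechanics behind the two variogram-free options mentioned, nearest neighbor and inverse distance, here is a minimal sketch; the sample coordinates and values are made up for illustration.

```python
import math

# Illustrative 2D samples: (x, y, value).
samples = [(0.0, 0.0, 10.0), (100.0, 0.0, 30.0), (0.0, 100.0, 50.0)]

def nearest_neighbour(x, y):
    """Estimate = value of the single closest sample."""
    return min(samples, key=lambda s: math.hypot(s[0] - x, s[1] - y))[2]

def inverse_distance(x, y, power=2.0):
    """Estimate = distance-weighted average of all samples."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(sx - x, sy - y)
        if d == 0.0:
            return v  # estimating at a sample location returns the sample
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

print(nearest_neighbour(10.0, 0.0))  # the closest sample wins outright
print(inverse_distance(50.0, 0.0))   # a smooth blend of all samples
```

In real estimators the search ellipsoid limits which samples enter the sum; here every sample participates for simplicity.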
To set one up,
once I open it we can edit it.
We can apply clipping, which is
if we want to constrain the very high values
and give them less influence,
we can apply a cap or floor.
In this case, because I’m actually looking at contaminants,
I don’t really want to cut them down initially,
so I will leave them unclipped.
I’m just doing point kriging in this case
so I’m not using discretization at all.
In some cases if I want to do block kriging,
then I would increase these discretization amounts.
I’m using ordinary kriging.
This is where we determine ordinary or simple.
And this is the variogram.
So you can pick
any variogram within the folder.
You set it to a particular variogram,
and that’s more for making sure
that it’s set to the correct orientation.
And then it’ll automatically go to the maximum range
of that variogram but you can modify these
to be less than that or greater.
It depends on the circumstances,
and often you’ll do a lot of testing
and sensitivity analysis to determine
the best setup for this one.
And then we can choose
minimum and maximum number of samples.
And this effectively becomes
our grade variable within the block model
and we can store other variables as well.
So if we’re using kriging,
we can store things such as kriging variance
and slope of regression,
which speak to how good the estimate might be.
We can store things such as average distance to samples
and these may help us with classification
criteria down the track.
So I’ve created an ordinary kriging estimator here,
and then I’ve also got another one.
This one is exactly the same,
but in this case I’ve actually applied
the variable orientation.
So we’ve set up a variable orientation within the folder.
It will be accessible in our parameter set up through here
and I’ll give it a slightly different name.
This one lets me compare the kriging results
with and without variable orientation applied.
So we set up all of our different parameters
in this estimation folder.
And we are then able to create a block model
to look at the results.
Setting up a block model is quite simple
so we can create a new block model
and we can also import block models from other software.
And when we’ve created one,
we can edit it by opening it up.
And in this case,
we’ve got a block size of 80 by 80 meters
in the X, Y direction and five in the Z,
so the blocks are fairly thin.
Again it’s all very visual,
and we can see our block size;
this is 80 by 80 in the X, Y direction here.
And we can change these boundaries at any time.
So once it’s been made,
if we wanted to adjust it,
I could just move the boundary up
and it will automatically update the model.
I won’t do that now so it doesn’t rerun.
A key concept when working with block models
within the Contaminants Extension is this evaluations tab.
So basically whatever we want to flag into the block model
needs to come across from the left-hand side
across to the right.
So what I’m doing here
is I’m flagging each block centroid with the domain model
so that will flag it with whether it’s within that domain
but we can also flag the geology.
So this will flag whether it’s till
or whether it’s bedrock,
and the vadose and saturated zones
could be flagged as well.
And then these are my grade or contaminant variables here.
So I’m just putting in all four that are generated
so I’ve got an inverse distance,
I’ve got two different krigings.
And also the nearest neighbor.
Nearest neighbor can be useful
for helping with validation of the model.
Again, any of these can be removed or added at any time.
So if I did want to look at the RBF,
I would just bring it across
and then the model would rerun.
So we can see here, the model has been generated.
And one of the key things for validation of first
is to actually look at the samples against the results.
So if I just go into a cross section,
making sure that our color schemes
for both the data and the block model are the same.
And we can import these colors,
so if we’ve got colors defined for our assays up here,
we can export those colors
like create a little file,
and then we can go down to our block model
and we can import those colors down here.
So on this one I did that: colors, import,
and then picked up that file.
And that way we get exactly the same color scheme.
And what we’re hoping to see
is that where we’ve got yellow samples,
we should have orange or yellow blocks around them;
higher grades will be in the reds and purples,
and lower grades should show up similarly.
Of course we have a lot of data in here.
There we go.
So we can see some low grade points over here,
and that’s equating to low grade blocks.
We’ve got really good tools for looking at the details
of the results on a block-by-block basis.
If I highlighted this particular block here,
we have this interrogate estimator function.
And this is the estimator that it’s looking at.
So for this ordinary kriging without variable orientation,
it’s found this many samples,
and depending on how we’ve set up our search,
in this case it’s included all those samples.
It’s showing us the kriging weights
that have been applied, and we can sort these
by distance from the sample.
So we can see the closest sample used was 175 meters away
and the greatest one was over 1,000 meters away.
But it’s got weights through there.
Part of the kriging process
can give negative kriging weights
and that is not a problem
unless we start to see negative grade values.
In that case, we might need to change
our sample selection a little bit
to eradicate that influence.
That’s just part of the kriging process.
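The kriging weights shown in the interrogate window come from solving the ordinary kriging system. Here is a minimal sketch with an assumed exponential covariance model and illustrative 1D sample positions, showing that the weights always sum to one (and can individually go negative, as discussed above).

```python
import numpy as np

def cov(h, sill=1.0, a=300.0):
    """Exponential covariance model (illustrative sill and range)."""
    return sill * np.exp(-3.0 * h / a)

xs = np.array([0.0, 50.0, 400.0])   # sample locations along one axis
vals = np.array([20.0, 60.0, 5.0])  # illustrative chloride values
target = 100.0                      # block centroid location

# Ordinary kriging system: covariances between samples, plus a
# Lagrange row/column enforcing that the weights sum to 1.
n = len(xs)
A = np.ones((n + 1, n + 1))
A[n, n] = 0.0
A[:n, :n] = cov(np.abs(xs[:, None] - xs[None, :]))
b = np.append(cov(np.abs(xs - target)), 1.0)

sol = np.linalg.solve(A, b)
weights, lagrange = sol[:n], sol[n]
estimate = weights @ vals
print(weights, estimate)  # weights sum to 1 by construction
```

Samples screened behind closer ones are where negative weights tend to appear; as Steve says, that is only a problem if it drives the estimate itself negative.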
You can see also that when we do that,
it shows the ellipsoid for that area.
And if we filter the block model
and I’ll just take away the ellipsoid for a second,
it shows you the block and it shows you the samples
that are being used.
So you’ve got a nice spatial visualization
of what samples are actually being used
to estimate the grade within that box through there.
And if you want to try something else,
you can go directly to the estimator and modify.
If I want to change this to three samples,
or reduce the search range,
you can see what the impacts would be. Okay.
So apart from visualization
and looking at on a block-by-block basis,
we can also run swath plots.
So within the block model itself,
we can create a new swath plot.
And the beauty of this is that
they are embedded within the block model
so that when we update that data later on,
the swath plots will update as well
and the same goes for reports.
If we build a report on the model,
then if we change the data,
the report tables will update as well.
So we’ll have a look at a swath
that’s already been prepared
and you can store any number of swaths within the model.
So what this one is doing
is it’s looking at the inverse distance.
It’s got the nearest neighbor which is this gray
and then we’ve got two different types of kriging.
So we can see straight away.
And it’s showing us in the X, Y and Z directions
relative to the axis of the block model.
So if we look in the X direction here,
we can see that all three estimators
are fairly close to each other
and what the red line is is the actual original sample data.
So these are the sample values
and what we generally do want to see
is that the estimators tend to bisect
whether we have these highs and lows,
the estimated values we should run through the middle.
Also the estimator should follow the trend
across the data.
The nearest neighbor can be used as a validation tool.
So if one of these estimators,
for instance, was sitting quite high
up here above the nearest neighbor line,
that would suggest we’re potentially overestimating
and there might be some problems with our parameters,
so we would want to change them.
Again we can export, we can copy this data
and put it into a spreadsheet.
Sometimes people have very specific formats
they like to do their swath plots in.
So that will bring all the data out into Excel.
So you can do that yourself
or we can just take the image
and paste it directly into our report.
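What a swath plot computes, a mean per slice along one axis for both the block estimates and the raw samples, can be sketched simply; the coordinates, values and 80 m slice width below are illustrative.

```python
from collections import defaultdict

def swath_means(points, slice_size=80.0):
    """points: list of (x, value); returns {slice_index: mean value}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, v in points:
        i = int(x // slice_size)   # which slice this point falls in
        sums[i] += v
        counts[i] += 1
    return {i: sums[i] / counts[i] for i in sums}

# Illustrative block estimates and raw samples along the X axis.
blocks = [(10.0, 12.0), (70.0, 18.0), (90.0, 40.0), (150.0, 44.0)]
raw = [(20.0, 14.0), (100.0, 45.0)]
print(swath_means(blocks))  # block means per slice
print(swath_means(raw))     # sample means per slice, the "red line"
```

Plotting the two dictionaries side by side, slice by slice, reproduces the comparison the swath plot makes between estimators and original data.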
Okay. If we want to report on our data,
we create the report in the block model.
And in this case, I’ve got just the basic report.
That’s looking at the geology and the water table,
and it’s giving us the totals and the concentrations.
And this is by selecting these categorical columns,
and then we have our grade or value columns.
So we need at least one categorical column
and one value column to create a report,
but you can have as many different categories
as you like within there.
And these also act almost like a pivot table,
so if I want it to show geology first
and then water table,
I can split it up through there like that.
We can apply a cutoff or we can have none as well.
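The pivot-table behavior just described, reordering the categorical columns to split the table the other way, is essentially a group-by; the column names and figures below are illustrative.

```python
import pandas as pd

# Hypothetical block table: categorical columns plus a value column.
blocks = pd.DataFrame({
    "geology": ["till", "till", "outwash", "outwash"],
    "water_table": ["saturated", "vadose", "saturated", "vadose"],
    "chloride": [120.0, 80.0, 10.0, 5.0],
})

# Reordering the grouping columns "splits" the table the other way,
# just like dragging the categories in the report dialog.
by_geology = blocks.groupby(["geology", "water_table"])["chloride"].mean()
by_water = blocks.groupby(["water_table", "geology"])["chloride"].mean()
print(by_geology)
print(by_water)
```

A cutoff is then just a filter such as `blocks[blocks["chloride"] > threshold]` applied before the group-by.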
Another advantage within the block models (mumbles)
is that we can create calculations and filters
and we can get quite complex with these.
We have a whole series of syntax helpers
which are over here on the right.
And then in middle here,
it shows us all of our parameters
that are available within the block model
to use within a calculation.
It’s also a nice little summary of the values
within each estimator.
Now you can see here, for instance,
I’ve got a couple of negatives
in this ordinary kriging without the variable orientation.
So I’d want to have a look at those
to see where they are and whether I need to remove them
or do something about them.
If I just unpin that for a second,
we can see we can get quite complex;
we’ve got lots of if statements and we can embed them.
So what’s happening with this one
is I’m trying to define porosity
based on the rock type within the model.
So I’ve defined that; if we go back to here,
I can look at the geology.
So the geology has been coded into the model.
And if I click on that block, you can see here
the geology model is till and the water table is saturated.
So I can use any of those components (mumbles),
so it’s saying, “If the geology model is till,
I’m going to assign a porosity of 0.25.”
And that’s a fairly simple calculation
but you can put in as many complex calculations as you wish.
This one, total concentration, is saying that
it must be within the unconsolidated sediments.
And what this one is doing is saying,
what happens if I’ve got some blocks
that haven’t been given a value.
So I can say that if there is no value,
I’m going to assign a very low one
but otherwise, if it’s greater than zero,
use the value that’s there.
So that’s one way of getting rid of negatives
if you know that there’s only a
couple of isolated ones there.
And then this one is an average distance.
So I store the average distance within the model.
So I’m using this as a guide for classification.
So it’s saying if the average distance
to the samples is less than 500 meters,
I’ll make it measured; less than 1,000, indicated;
less than 1,500, inferred.
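The three calculations described here, the porosity lookup, the fallback for missing or negative values, and the distance-based confidence category, could be sketched like this; the porosity values, the fill value and the label for the third distance band are assumptions on my part, not from the webinar.

```python
# Sketches of three block-model calculations. All constants below are
# illustrative except the 500/1,000/1,500 m distance thresholds,
# which echo the webinar.
def porosity(geology):
    # Assumed values: 0.25 for till, 0.30 for everything else.
    return 0.25 if geology == "till" else 0.30

def clean_value(value, fill=0.001):
    # Replace missing or negative estimates with an assumed very low value.
    return value if value is not None and value > 0.0 else fill

def confidence(avg_distance_m):
    # Distance-based classification guide.
    if avg_distance_m < 500.0:
        return "measured"
    if avg_distance_m < 1000.0:
        return "indicated"
    if avg_distance_m < 1500.0:
        return "inferred"
    return "unclassified"

print(porosity("till"), clean_value(-2.0), confidence(700.0))
```

Each function maps one block's attributes to a new attribute, which is exactly what the calculation editor does per block centroid.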
The beauty of any calculation, numeric or categorical
is that it’s available instantly to visualize.
So we can visualize this confidence category.
So you can see that there,
I’ve just got indicated and inferred,
and a little bit of measured there as well.
Let’s split that out. (mumbles)
So we can see here,
that’s based on the data and the data spacing.
We might not use those values directly to classify
but we can use them as a visual guide
if we wanted to draw shapes to classify certain areas.
So once we’ve done that kind of thing
we can build another report based off that calculation.
So in this case we’re using that calculated field
as part of that categorical information.
And so now we’ve got classification,
the geology and the water table in the report.
And again that’s saved automatically and it can be exported.
We can copy it into an Excel Spreadsheet
or you can just export it as an Excel Spreadsheet directly.
Okay. So that’s the main usage
of the Contaminants Extension.
And I’ll just pass back to Aaron to finish up.
<v Aaron>Fantastic. Thanks Steve.</v>
We have time now for questions as well.
So if you have them already,
please feel free to put a question in the question box
or you can raise your hand as well
and we can unmute you
and you can verbally ask the question as well.
While we give people a chance to do that,
beyond today’s meeting there’s always support for Leapfrog
and the Contaminants Extension
by contacting us at [email protected]
or by giving our support line a phone call.
After today’s session as well,
we’d love you to give the extension for Leapfrog a go.
We have a trial available
that you can sign up for on our website
on the Leapfrog Works product page;
you can see the URL there.
We’ll also be sending an email out after this webinar
with the recording for today
and also a link to a four-part technical series
on the Contaminants Extension,
which will take you through step-by-step
how to build the model that Steve has shown you
from scratch and the process behind that.
And includes the data there as well.
So that’s a fantastic resource we’ll be sending to you.
We also have online learning content,
and we also have the
Leapfrog Works and Contaminants Extension help pages;
do have a look at them later.
And that’s a really useful resource
if you have any questions.
So yeah like I said, if you have questions,
please put them up or raise your hand.
One of the questions that we have so far is:
often contaminant sources
are at the surface, propagating down
into the soil or groundwater.
How would you go about modelling a change
in the directional trend of the plume,
going from vertical as it propagates down
to spreading out horizontally
once it’s in an aquifer or an aquitard?
So can you answer that for us please Steve?
<v Steve>Yep so everything about the Contaminants Extension</v>
is all about how we set up our domains
at the very beginning.
So it does need a physical three-dimensional representation
of each domain.
The way we would do that is
we would need the aquifer surface,
so the soil would be one domain,
and then you’d model your aquifer as a separate horizontal layer.
So you would set up two estimation folders,
one for the soil, one for the aquifer.
And then within the soil one,
you could make your orientation vertical
and within the aquifer you can change it to horizontal.
You’re then able to join these together
using the calculations, so that you can visualise
both domains together in the block model.
So you only need one block model
but you can have multiple domains
which get combined together.
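Leapfrog does all of this through its interface, but the combining calculation Steve describes can be sketched in plain Python. This is purely illustrative; the domain names, estimate arrays, and values are made up, not Leapfrog syntax:

```python
import numpy as np

# Hypothetical block model: each block has a domain code and one
# estimate per domain (soil estimated with a vertical trend,
# aquifer with a horizontal one). NaN marks blocks outside a domain.
domain = np.array(["soil", "soil", "aquifer", "aquifer"])
soil_est = np.array([12.0, 8.5, np.nan, np.nan])
aquifer_est = np.array([np.nan, np.nan, 4.2, 3.1])

# The "calculation" simply picks the estimate belonging to each
# block's domain, giving one combined field to visualise.
combined = np.where(domain == "soil", soil_est, aquifer_est)
print(combined.tolist())  # [12.0, 8.5, 4.2, 3.1]
```

The same idea extends to any number of domains: one block model, one combined field, one estimator per domain.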
<v Aaron>Fantastic. Thank you Steve.</v>
Another question here is,
were any of those calculations
automatically populated by Leapfrog
or did you make each of them?
If you made them, just to be clear,
did you make them to remove the negative values
generated by the kriging?
<v Steve>No, they’re not generated automatically.</v>
So I created all of those calculations
and you don’t have to force removal of the negatives.
It really comes down to first
realizing that negative values exist.
And then we would look at them in the 3D scene
to see where they are.
Now, if they’re only the isolated block around the edges
quite a bit away from your data,
then they might not be significant.
So then you could use the calculation to zero them out.
Sometimes you get a situation
where you can get negative grades
right within the main part of your data.
And this is a thing called negative kriging weights,
and the kriging process can cause this.
So in that case you would have to
change your search selection.
So you might have to decrease the maximum number of samples.
You might need to tweak the variogram a little bit,
and then you should be able to
reduce the impact of negative weights
so that you won’t get negative grades.
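The "zero them out" calculation mentioned above can be sketched like this. It is an illustrative Python snippet with made-up block values, not Leapfrog's calculation syntax, and it assumes you have already checked the negatives sit away from the data:

```python
import numpy as np

# Hypothetical kriged block values; a couple of edge blocks have
# gone slightly negative because of negative kriging weights.
kriged = np.array([5.2, 0.9, -0.3, 2.7, -0.05])

# The clean-up calculation: clamp negatives to zero before
# reporting mass or volume.
cleaned = np.maximum(kriged, 0.0)
print(cleaned.tolist())  # [5.2, 0.9, 0.0, 2.7, 0.0]
```

If the negatives appear in the middle of the data instead, this clamp hides a real problem, which is why Steve recommends fixing the search or variogram in that case.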
<v Aaron>To add to that first part of the question as well.</v>
Once you have created your calculations,
you can actually go in and export them out
to use in other projects as well.
So, where are we... calculations and filters.
So in this calculations tab here,
you can see we have this option here
for importing or indeed for exporting here as well.
So once you’ve set it up once,
you can reuse it in other projects.
It’s very easy to transfer these across projects,
maybe changing some of the variable names
if they’re different between projects.
And it’s a calculation file
that you can send to other people in your company as well.
<v Steve>So if you don’t have the same variable names,</v>
what it does,
it just highlights them with a little red underline,
and then you just replace them
with the correct variable name.
So yeah easy to set things up
especially if you’re using the same format
of block model over a whole project area
for different block models.
<v Aaron>Fantastic. And yeah as a reminder,</v>
if you do have questions,
please put them in the chat or raise your hand.
We have another question here.
Can you run through the steps to include the 95th percentile?
<v Steve>Not directly,</v>
but we do do log probability plots.
So if I go back up to this one here,
to my data statistics,
it starts off as a histogram,
but I can change this to a log probability plot.
So then I’ll be able to see where the 95th percentile is.
It’s down there; the 95th percentile is up through there, like that.
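Reading the 95th percentile off a log probability plot corresponds to a simple percentile calculation. A minimal sketch with synthetic lognormal data (the kind of positively skewed data these plots are used for; all numbers here are invented):

```python
import numpy as np

# Synthetic lognormally distributed contaminant concentrations.
rng = np.random.default_rng(42)
conc = rng.lognormal(mean=1.0, sigma=0.8, size=500)

# The value you would read off the plot at the 95th percentile.
p95 = np.percentile(conc, 95)
print(round(p95, 2))
```

That value could then feed a calculation or filter, for example to flag blocks estimated above the 95th percentile of the input data.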
<v Aaron>Awesome. Thanks Steve.</v>
So can you please show
the contamination plume advance over time?
And can you show the database with the time element column?
<v Aaron>So we don’t have any time</v>
or monitoring data set up for this project,
but you could certainly do that.
So you could just set up your first one,
make a copy of it,
and then change the data over
to the next monitoring event.
So for each monitoring event,
you’re just using that separate data
in the same estimations and the same block models
with the Contaminants Extension.
If you go into the points folder,
you do have this option
to import time-dependent points as well.
So you can import a point cloud
that has the time data as an attribute.
You can actually filter that.
So visually you can use a filter
to see the data at a certain point in time
but you could then actually
if you wanted to, you could set up a workflow
where you set filters up
to use that point data in your block model
and just filter it based on time as well.
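A time filter like that can be sketched as follows. This is illustrative Python, not the Leapfrog filter interface, and the dates and concentrations are invented:

```python
from datetime import date

# Hypothetical time-dependent points: (timestamp, concentration).
points = [
    (date(2021, 3, 1), 14.0),
    (date(2021, 3, 1), 9.5),
    (date(2021, 6, 1), 11.2),
    (date(2021, 9, 1), 7.8),
]

# Filter down to a single monitoring event, analogous to filtering
# the imported point cloud on its time attribute before estimating.
event = date(2021, 3, 1)
subset = [c for t, c in points if t == event]
print(subset)  # [14.0, 9.5]
```

Running the same estimation against each event's subset is what lets you compare the plume between monitoring rounds.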
<v Steve>So when you create an estimator,</v>
so this one here,
you can see here you can apply a filter to your data.
So yes, if you created filters based on the time,
you could run a series of block models.
Well actually you could probably do it
within the single block model.
So you would just set up
a different estimator for each time period,
estimate them all,
evaluate them all into the same block model,
and then you could look at how it changes.
<v Aaron>Yeah. Awesome.</v>
<v Steve>There’d be a few different ways</v>
of trying to do that.
<v Aaron>We have a question here</v>
which is, when should you use
the different estimation methods?
<v Steve>It really comes down</v>
to personal choice a little bit,
but generally we do need a certain amount of data
to do kriging, so that we can get a reliable variogram.
So early stages,
we might only be able to do inverse distance
whether it’s inverse distance squared or cubed et cetera.
It’s again personal preference.
Generally though, if we wanted to get
quite a localised estimate,
we may use inverse distance cubed,
and for a more general one, inverse distance squared.
Once we can start to get a variogram out,
it’s always best to use kriging if possible.
You’ll find that, especially with sparse data,
the results shouldn’t be too different from each other,
but kriging does tend to work better.
The RBF, I don’t use it much myself
but it can be quite useful to get early stage global idea
of how things are going.
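The inverse distance options Steve compares differ only in the power applied to the distance. A minimal sketch, with made-up sample positions and values (Leapfrog handles this internally; this is just to show the effect of the power):

```python
import numpy as np

def idw(sample_xyz, sample_vals, block_xyz, power=2):
    """Inverse distance weighted estimate at one block centroid.

    power=2 is the more general inverse distance squared; power=3
    (cubed) weights nearby samples more heavily, giving a more
    localised estimate. Assumes the block is not exactly on a
    sample (distance would be zero).
    """
    d = np.linalg.norm(sample_xyz - block_xyz, axis=1)
    w = 1.0 / d ** power
    return np.sum(w * sample_vals) / np.sum(w)

samples = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
values = np.array([2.0, 8.0, 6.0])
block = np.array([2.0, 2.0, 0.0])

e2 = idw(samples, values, block, power=2)
e3 = idw(samples, values, block, power=3)
print(round(e2, 2), round(e3, 2))
```

With the cube, the estimate sits closer to the nearest sample's value, which is exactly the "more localised" behaviour described above.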
<v Aaron>Yeah. So when you have sparse data,</v>
so not much data, it can be useful in those initial stages,
maybe a pilot or desk study phase or something.
<v Steve>Yeah. Yeah.</v>
<v Aaron>For some case (mumbles).</v>
Another question here,
can you please explain the hard and soft boundaries
that appeared in the dialog box just before,
in that initial estimation, this one here.
<v Steve>So what this is,</v>
well in this case it’s not all that relevant.
There are only two samples outside our domain.
So if I bring back up my domain boundary,
let’s do that, okay.
So hard and soft basically
is what happens around the boundary of the domain itself.
How do I treat the samples?
So a hard boundary, and 99% of the time
you will use a hard boundary,
is you saying:
basically, I’m only looking at samples
within the domain itself,
and I don’t care about anything outside it.
The instance where you may use a soft boundary
is if the concentration can’t be attributed directly
to one domain or another,
and there’s a gradational change across that boundary.
By using a soft boundary, we can define a distance,
we could say 20 meters.
And what that would do is look for samples
up to 20 meters beyond the boundary in all directions.
And we’ll use those as part of the sample selection.
You still only estimate within the domain,
but around the edges you use samples
that are just slightly outside.
So that’s the main difference between hard and soft.
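The hard versus soft sample selection can be sketched in one dimension (illustrative only; Leapfrog applies this in 3D around the domain mesh, and all positions here are invented):

```python
import numpy as np

# Hypothetical samples: position along one axis; the domain
# boundary sits at x = 100.
x = np.array([20.0, 60.0, 95.0, 110.0, 150.0])
boundary = 100.0

# Hard boundary: only samples inside the domain are used.
hard = x[x <= boundary]

# Soft boundary of 20 m: also accept samples up to 20 m beyond
# the boundary; estimation still only happens inside the domain.
soft = x[x <= boundary + 20.0]

print(hard.tolist())  # [20.0, 60.0, 95.0]
print(soft.tolist())  # [20.0, 60.0, 95.0, 110.0]
```

Note the sample at 150 m is excluded either way: the soft boundary widens the selection, it does not remove the boundary.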
<v Aaron>And I guess going back</v>
to a question and answer from before,
around using multiple domains
to handle the vertical and then horizontal trends,
would it be fair to say
that maybe you’d use the soft boundary
in that first one, to accommodate the values
near that domain boundary in the horizontal?
<v Steve>That’s right.</v>
And I mean, one of the beauties of a Leapfrog Works,
the Contaminants et cetera,
is that it’s so easy to test things.
So once you’ve got one of these folders set up
if you want to try something a little bit different,
you just copy the entire folder
and you might pick a slightly different domain
or if you’re going to do it on a different element,
choose it there.
And then just copy, that’s fine.
And it copies across basically any variograms
that you might have made or estimators you’ve set up.
Then you can just go back into that copy and say,
I want to try something a little bit different
with this kriging set up.
So you might try a different distance,
some different samples et cetera.
So then you would just run them,
evaluate that into your model
and look at it on a swath plot
to see what the difference was.
<v Aaron>Awesome. Thanks Steve.</v>
I think you touched on maybe this question a little bit,
but is there an output or evaluation or really…
How can you show uncertainty of the block model?
<v Steve>Okay. So there are lots of different ways</v>
of trying to look at uncertainty.
The simplest way is just looking at data spacing.
And that’s a little bit of what we tried to do
with the calculation field before.
So I open up my calculations again.
So we stored the average distance to the samples.
That was done here in the outputs,
where I stored the average distance to samples.
You can also store the minimum distance
to the closest sample.
And then again, by making up
some kind of a calculation like this,
if that’s not working, then I can change it.
I could change that to 800, 1200 et cetera,
until it sort of makes sense.
So we can use (mumbles).
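A data-spacing classification calculation like the one described can be sketched as follows. This is illustrative Python; the thresholds and labels are assumptions for the example, not Leapfrog defaults, and in practice you would tune them until they make sense against your data:

```python
import numpy as np

# Hypothetical stored output: average distance (m) from each block
# to the samples used in its estimate.
avg_dist = np.array([150.0, 420.0, 900.0, 1300.0])

# Tighter spacing means more confidence in the estimate; the 500 m
# and 1000 m thresholds here are the values you would tweak.
labels = np.select(
    [avg_dist < 500.0, avg_dist < 1000.0],
    ["high", "medium"],
    default="low",
)
print(labels.tolist())  # ['high', 'high', 'medium', 'low']
```

That classified field can then go into the block model report, alongside geology and water table, just as shown in the demonstration.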
<v Steve>We can use distance buffers,</v>
which are part of the Contaminants Extension
functionality within Works,
and that might help you as well
decide on what kind of values to use
within these calculations.
For classification, if you’re using kriging,
you can also use the kriging variance directly;
it often sits quite nicely alongside data spacing.
So there’s lots of different ways.
<v Aaron>Fantastic. Well thank you so much Steve.</v>
We’re coming to the end of our time now.
Thank you, everyone who joined
and particularly if you asked a question.
If we didn’t get to your question
or if you think of one after today’s session,
then please feel free to reach out
at the [email protected]
And if there’s anything we couldn’t get to today,
we’ll also reach out directly to people.
As I said, you will be receiving a link after today
with the recording as well as
the link for that technical series,
which is a really good resource to go through yourself
using the Contaminant Extension.
And if you don’t already have access to it,
we can give you a trial of it as well.
So you can give it a go.
Thank you so much for joining today’s webinar.
We really hope you enjoyed it
and have a fantastic rest of your day.
<v Steve>Thank you very much.</v>