Discover how you can use Leapfrog Works + Contaminants Extension to create a probabilistic model from CPT data.
This project uses borehole data from an offshore windfarm and during the session we will go through the steps to create an Indicator Kriged block model, in order to quantify the risk from clay in the soil profile.
Customer Solutions Specialist, Seequent
There are many factors to be taken into consideration
when designing monopiles for a wind farm.
One will be the soil profile
that the foundations will intersect
and the risks posed by the presence of clay.
Using the geostatistical tools available
in Leapfrog Works, this risk can be quantified.
I’m Carrie Nicholls, a customer solutions specialist
here at Seequent, and over the next 20 minutes,
I will take you through each step
to create an indicator kriged block model
to be able to quantify the probability of clay
in a soil profile.
In the project, we have borehole data.
As you can see, it’s already loaded
into the scene here.
On the borehole data, we have the CPT data,
though not all of the boreholes have CPT data.
So the ones that don’t have CPT data are represented
by these borehole traces,
which are the gray lines here.
And where we have CPT data,
you can see the colored columns here.
Looking directly down, the extent of this project
is around 14 kilometers,
both north-south and east-west.
And on average, these boreholes are about 50 meters depth.
You can also see that there’s a bit of a data gap
in the middle here.
The boreholes are kind of in two distinct zones
in the North and the South.
So eventually we will model those areas separately.
The CPT data was then used to create
or calculate the soil behavior type index,
so using a Robertson equation, and then categorized
into silts, sands, sand mixtures, clays,
and organic soils.
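As a rough illustration (not the project’s exact calculation), the categorization from the soil behavior type index Ic might look like this in Python; the thresholds shown are the commonly cited Robertson boundaries, so verify them against the equation and cut-offs used in your own project:

```python
# Hypothetical sketch of SBT categorization from the Ic index.
# Thresholds are commonly cited Robertson boundaries, NOT taken
# from this project -- check them against your own calculation.
def sbt_category(ic):
    if ic > 3.60:
        return "Organic soils"
    if ic > 2.95:
        return "Clays"
    if ic > 2.60:
        return "Silt mixtures"
    if ic > 2.05:
        return "Sand mixtures"
    return "Sands"

print([sbt_category(v) for v in (1.8, 2.3, 2.8, 3.2, 3.9)])
```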
From that data, a soil model was created, a 3D soil model,
which was done under the geological modeling section
in Leapfrog Works, which is the categorical modeling.
I can just show you that in the scene now,
switch those volumes on,
and you can see the different layers within the model
when we go into section.
We can see from the colors,
we’ve got a number of soil types in here.
The vast majority is the sand and sand mixtures,
which is in the orange.
We’ve got some silt mixtures,
which are the green, and the horizon of interest,
or the intervals of interest in this small exercise
are the clay layers, which are represented by the gray.
You can see that the 3D model is a simplification
of the logged data we have.
In this first step, we will look at compositing the data.
So I want to check first what the intervals are on this data.
So I shall right-click on the CPT table
at the top here and open the table of statistics.
So you can select everything if you wanted to do some checks
on your input data.
But the one that I’m interested here really
is the interval length.
So, I can see here I’ve got a mean of 0.02,
a minimum of 0.001, and a maximum all the way up to 50.
Now, I have a suspicion that this maximum of 50 meters
is probably because of those boreholes
that don’t have any CPT data.
So first of all, I want to exclude those boreholes
and then re-look at the statistics.
So I’ve already created a query filter on the table there,
which just excludes any holes that don’t have any CPT data.
So I’m going to apply it to my statistics here.
And now I can see that the table is refreshed
and my maximum and minimum are both 0.02.
So that’s two centimeters.
So all the samples, or all the intervals from the CPT data
are at two centimeters.
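The interval-length check can be sketched outside Leapfrog too; this hypothetical snippet mimics the query filter and the statistics table on made-up records:

```python
# Hypothetical sketch of the interval-length check: exclude holes
# with no CPT data, then look at interval-length statistics.
import statistics

# Each record: (hole_id, from_depth, to_depth, cpt_value or None)
intervals = [
    ("BH01", 0.00, 0.02, 1.2),
    ("BH01", 0.02, 0.04, 1.3),
    ("BH02", 0.00, 50.0, None),  # trace-only hole, no CPT data
]

# Query filter: keep only intervals that actually have CPT data
with_cpt = [r for r in intervals if r[3] is not None]

lengths = [round(to - fr, 3) for _, fr, to, _ in with_cpt]
print(min(lengths), max(lengths), statistics.mean(lengths))
```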
Because two centimeters is very small,
I want to composite it slightly
before I take it into the estimate.
The borehole spacing is anything
from 500 to 700 meters apart
up to two and a half kilometers.
The depth of the boreholes is around 50 meters,
so the measured interval of two centimeters is very small
compared to the spacing of the boreholes.
So under the composites folder,
which you will see if I expand this here,
under the borehole data, there is a composites folder.
I’ve created a numeric composite.
So I’ll just take you into that
to show you what I’ve done. For the output columns,
I’ve pulled everything across.
And rather than also pulling across the soil behavior type,
I’m going to recalculate that on the composited data.
And the composite length I’ve chosen is 10 centimeters.
So it’s just slightly increasing the length
from two centimeters to 10 centimeters.
I don’t want to composite it too drastically
and lose all the variability.
I still want to maintain some variability.
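Conceptually, the compositing is just averaging runs of the 2 cm samples into 10 cm intervals. A minimal sketch, assuming fixed-length, gap-free intervals (real compositing also handles partial and missing intervals):

```python
# Minimal compositing sketch (an assumed simplification of numeric
# compositing): average fixed 2 cm samples into 10 cm composites.
def composite(values, n=5):
    """Group consecutive samples in runs of n and average each run."""
    return [sum(values[i:i + n]) / n
            for i in range(0, len(values) - n + 1, n)]

ic = [2.1, 2.2, 2.0, 2.3, 2.4, 3.1, 3.0, 3.2, 2.9, 3.3]  # Ic at 2 cm steps
print(composite(ic))  # two 10 cm composites
```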
So you can still do calculations on the composite table.
And I’ll show you the calculations now.
You can copy the calculations from another item.
So I’ve copied the two calculations,
the soil behavior type index and category
from the original table.
And then I’ve created a third column here
by creating a numeric calculation.
And this is going to be our indicators.
I’ve created an if statement
where a category equal to clay is given a value of one,
and everything else a value of zero.
And this column will be our input into the estimation.
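The indicator calculation itself is simple; this sketch uses hypothetical category names and includes the kind of proportion check we do later on the estimation screen:

```python
# Sketch of the clay indicator calculation (hypothetical category
# names; in Leapfrog this is a numeric calculation on the table).
def clay_indicator(category):
    return 1 if category == "Clay" else 0

categories = ["Sand mixtures", "Clay", "Silt mixtures", "Clay", "Sand mixtures"]
indicators = [clay_indicator(c) for c in categories]
proportion_ones = sum(indicators) / len(indicators)
print(indicators, proportion_ones)  # quick sanity check on the inputs
```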
In the next step,
we’ll be setting up the estimation parameters.
In the project tree down towards the bottom,
there is a folder contaminant models.
And when you expand that, there are two sub folders,
estimation and block models.
Under the estimation folder will be where the parameters sit
for each domain that we estimate into.
And in this case, we’re only estimating in one domain.
That is basically the volume
that we will be estimating into.
So you would right click new contaminant estimation
on the estimation folder.
I’ll just double click to open the one I’ve already created.
So there are two inputs required here,
the domain definition, so that’s the actual volume
we’re going to estimate into.
So I have a volume that I’m going to select
as the boundary from the geological model,
or basically our soils model that has already been created
in this project.
And the values will be the calculated indicators created
under the compositing table.
Make sure the transform type is set to none.
I think by default it would be log, but we want none.
We don’t want it transformed.
At the top on the right-hand side here,
you can have a quick check on the values.
Around 21.5% are ones, that’s our clay,
and everything else is set to zeros.
So you can just double check
that you have selected the correct file at this stage.
And you press OK.
It will then create these sub objects
underneath that top level here.
So we’ve got the domain and the value.
So these are the two inputs we just selected.
Then we have three folders.
The spatial models is the spatial model
that will be used in the estimation.
So that’s going to be the variogram model.
I’ll double click into this.
So I’ve already modeled this variogram model.
I just need to refresh this.
If I come to the axis aligned,
you will see that in the z direction, or the minor axis,
we have a very well-defined experimental variogram.
So this was very easy to model.
The semi-major and major axes
were not so easy to model,
because of the very, very wide spacing of the data,
but you just need to do the best you can.
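For reference, an experimental semivariogram like the well-defined down-hole one can be computed directly from the indicator composites; a toy sketch with made-up values:

```python
# Toy experimental semivariogram along a single borehole (the
# z / minor-axis direction). Data values are hypothetical.
def semivariogram(values, max_lag):
    """gamma(h) = mean of 0.5*(v[i+h]-v[i])^2 over all pairs at lag h."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = [0.5 * (values[i + h] - values[i]) ** 2
                 for i in range(len(values) - h)]
        gammas.append(sum(diffs) / len(diffs))
    return gammas

ind = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]  # clay indicator down-hole
print(semivariogram(ind, 3))  # variance grows with separation
```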
Then you will need the estimator itself.
So under the estimators, if you right click,
you will see the various options there for estimation.
We’re going to do a kriging estimator.
So create one of those.
And I shall double click to go into the estimator.
So for the tabs that we’re interested in here,
I’ve changed the discretization to 5 x 5 x 2.
So this is for the block discretization.
Make sure that it’s selecting
the variogram model you’ve made.
So if you’ve made more than one,
just make sure it is selecting the correct one.
Ellipsoid range I have set to 1,500 meters
in the maximum and intermediate directions.
So that’s sort of in the x and y really.
And then in the minimum or the vertical,
I have set to three meters.
So it’s quite constrained in the z
compared to the two horizontal directions.
Under the search definition,
I’ve used a minimum of seven samples
and a maximum of 12.
Again, quite constrained.
And then for the outputs,
you can just select what you would like written
onto the block model.
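To give a feel for what the kriging estimator does under the hood, here is a toy ordinary-kriging sketch in one dimension. This is not Leapfrog’s engine; the spherical variogram, range, and data are purely illustrative:

```python
import numpy as np

# Toy ordinary-kriging sketch (NOT Leapfrog's implementation):
# estimate one location from nearby indicator composites.
def spherical(h, sill=1.0, rng=3.0):
    """Spherical variogram, flat at the sill beyond the range."""
    h = np.minimum(h, rng)
    return sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

def ordinary_krige(xs, vs, x0):
    """Solve [gamma_ij 1; 1 0][w; mu] = [gamma_i0; 1] for the weights."""
    n = len(xs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(np.abs(xs[:, None] - xs[None, :]))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical(np.abs(xs - x0))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ vs)

xs = np.array([0.0, 1.0, 2.0, 4.0])   # composite depths (m)
vs = np.array([1.0, 1.0, 0.0, 0.0])   # clay indicators at those depths
print(ordinary_krige(xs, vs, 1.5))    # estimated probability at 1.5 m
```

Because kriging is an exact interpolator, estimating at a data location returns the data value itself.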
Now we can go ahead and create the block model.
So the block model will be
underneath the block models folder here.
And you only have one option here, new block model.
So I’m just going to open the one I’ve created already.
Just put that to the side and we’ll look down on our data.
So you can see the outline of the block model
that I have made here.
This is sitting on top of the soil model,
the extents of the soil model,
and our borehole data as well you can see.
So, with the extents of this block model,
you can see that I’ve just hugged it
around this Northern part of the data.
And I’ve also rotated the block model
to sort of be in line with the general trend of the data.
The block size I’ve given is 500 by 500 by one meter.
You can see the grid of the blocks in the scene here.
And just make sure when you’re doing the extents
in the z that it is comfortably around the data,
not taking in too much dead space,
which can sometimes happen.
So the extents are controlled
by just moving these arrows here.
It’s very easy.
You can just type them in as well.
The evaluations types,
this is everything that we want written
onto the block model.
So you can see that I’ve got quite a few things
on here at the moment,
but the very least you’re going to want is the indicator
or the estimator that you’ve set up already.
And if you’ve made more than one,
just bring all the ones on
that you want to have on the block model.
I’ve also brought on the soil model, the geological model.
So I can have that coded onto the block model as well.
When the block model has run,
drag it into the scene to view.
I’ve already created a discrete legend
to view the block model by.
So it’s just going from zero to one in 0.1 increments.
You can see that there’s some variability
in the blocks and the estimated values in the blocks here.
So it looks like nothing major has gone wrong,
or at least it’s looking reasonably correct at this stage.
So it’s best then to have a look in section,
because that’s what we’re most interested in seeing,
the soil profile and the estimated model.
So I’m just going to cut it to section here,
and shift out to look perpendicular to the section.
So I’m just going to hone in on this particular borehole.
We can see we have a very distinct clay layer here,
and then it’s a bit more diffuse in certain areas.
And we’ve got clearly distinct areas where there is no clay.
So if I select on a block,
say, sitting right in the middle here of the clay,
we can see it’s got the clay IK value.
The probability is one,
whereas if I click in a purely purple area,
the probability of clay is zero.
So clearly what we’re most interested in
is where we have areas where we have a mixture of the two
and perhaps our geological model hasn’t picked up
or hasn’t modeled this as clay.
So if I click on this zone,
we can see we have a clay percentage or probability of 0.25
and here as well at also 0.25.
So now we can see we have probability associated with this.
Whereas in our 3D model, if I make that quite transparent…
We can see that our model, our interpretation,
had this zone and this zone,
and these zones were not included in the clay.
So clearly when we’re designing,
or if you’re going to design a monopile
to intersect this area,
there are areas where there’s a higher probability of clay
than you would actually end up modeling.
When you have validated the model,
the next step is to quantify the results.
In order to have meaningful results,
you will most likely need to create some calculations
and filters on your block model.
So that’s done through the calculations and filters
on the block model.
You can see here
I have created two category calculations already.
So for the category calculations, when you create them,
make sure you select the category calculation type.
And I’ve created one for probability in steps of 0.1.
And then I’ve also done one for depths,
so I can report by depth intervals.
For the filters, I’ve created two filters here.
One is around all of the boreholes,
and that’s just to isolate the blocks
where they’re closest to the borehole.
And then the second one is a similar thing
around one particular borehole.
To do this as well, I had to create a numeric model
using the new distance function.
Just use the borehole traces
to do the distance function from,
and then apply any filters
if you want to isolate it around certain drill holes.
Then you just evaluate that onto your block model
in the same way as you evaluate the estimator
and the geological model.
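The distance-based filter amounts to keeping blocks within some radius of the borehole traces; a simplified sketch with hypothetical coordinates:

```python
import math

# Simplified sketch of the distance-based block filter (assumed
# logic): keep blocks whose centroid lies within a radius of any
# borehole trace point. Coordinates are hypothetical.
def near_boreholes(blocks, trace_points, radius):
    return [blk for blk in blocks
            if any(math.dist(blk, p) <= radius for p in trace_points)]

blocks = [(0, 0, 0), (100, 0, 0), (900, 900, 0)]   # block centroids (m)
traces = [(10, 0, 0), (110, 10, 0)]                # borehole trace points
print(near_boreholes(blocks, traces, 50.0))        # far block dropped
```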
When you create a report,
you need to have a category on your block model
and use one of the categories or more.
You can add more than one category to report,
and then obviously a numeric value.
So you do that when you select the columns.
Just note here that the default density
is quite high for soils;
it’s more for solid rock,
something like 2.57.
So you’d need to change that down,
unless you’ve actually estimated density,
in which case you can just select it there.
And this report is just a very basic one.
I’ve just used the soil model as the category.
And then I’m just getting the average clay indicator
for each horizon where it’s present.
The next report here, I’ve used probability as a category,
and that’s using that calculated field.
Now, you may have noticed
that I didn’t actually create a calculated field
for the less than 0.5.
I’ve done this within the report.
So you can group your categories within the report.
So for example, maybe there’s a threshold
where you want to check the volume and the mass
associated with a probability of clay
higher than 0.5.
Then you can just group the categories to get that
and report that.
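The grouped report boils down to summing volume and mass over blocks above the threshold; a sketch with hypothetical block records and an assumed soil density:

```python
# Sketch of the grouped probability report (hypothetical block
# records; 500 x 500 x 1 m blocks give 250,000 m3 each).
blocks = [
    {"p_clay": 0.10, "volume": 250_000.0},
    {"p_clay": 0.55, "volume": 250_000.0},
    {"p_clay": 0.80, "volume": 250_000.0},
]
density = 1.8  # t/m3 -- an assumed soil density, not the rock-like default

# Group: everything with P(clay) >= 0.5, as grouped in the report
high = [b for b in blocks if b["p_clay"] >= 0.5]
volume = sum(b["volume"] for b in high)
mass = volume * density
print(volume, mass)
```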
So I can also switch these other things off in my report,
these other categories,
’cause otherwise the report itself
becomes a little bit too cluttered
and doesn’t necessarily make sense.
The third report I’ve used the depth.
So I’m interested in just the sand mixtures,
and I’m looking at the probability of the clay on average
in these depth increments.
So why do I want to do that?
That’s so I can see where there are high probabilities
of clay being present.
So I can see there’s a blip there
and also a little bit further down there.
Knowing the depth then allows me to go back
to that borehole and interrogate the model
and recheck the layers and take that into consideration
when I’m designing the monopile.
These filters and calculations that I’ve used for reporting
are also useful for use in the scene
if you want to filter your data in the scene.
So I can see here that’s the one around that borehole.
And also if you want to export your model, your block model,
you can use those filters for that as well.
In summary, we started with the exploratory data analysis
where we looked at the statistics of the intervals.
We composited the data, and then created an indicator field
for the clay.
We then set up the estimation parameters for the kriging.
First you create the object with the two inputs
of the volume and the values.
Then you do the variogram modeling
followed by setting up the estimator itself
with the estimation parameters.
Moving onto the block modeling,
first with the structure of the block model
with the block sizes and the extents,
and then evaluating on the estimator we had created
along with the geological model or the soil model.
And onto validation of the model.
That’s where you are checking the estimate
compared to the input data.
The last step takes us to the point of the exercise,
which is to quantify the material where clay poses a risk
in the design of a monopile.