
Lyceum 2021 | Together Towards Tomorrow

Knowing the subsurface conditions is critical for successful geotechnical analysis and design.

However, geomechanics properties are inherently variable and difficult to obtain, resulting in uncertainty. How then can we properly understand geotechnical risk? This session presents a means to characterize risk as a function of uncertainty, divides the continuum of tools available to geoscience and geological engineering practitioners for addressing uncertainty in risk assessment, and proposes a framework that matches tools to risk character in order to improve risk assessment outcomes.



Ray Yost
Principal Geotechnical Engineer – Advisian

Chris Kelln
Director of Geotechnical Analysis, GeoStudio – Seequent


30 min


Video transcript

(upbeat music)

Hello and welcome to this presentation

on understanding geotechnical risk.

My name is Chris Kelln,

I’m the director of Geotechnical Analysis

for the GeoStudio business unit here at Seequent.

And I have the pleasure of introducing our speaker,

Dr. Ray Yost.

Ray has nearly 20 years of experience

working in the fields of geology, hydrogeology,

and geotechnical engineering

for the civil and mining sectors.

His career has included tenures

at Oregon Department of Transportation,

Rio Tinto Minerals, Teck Resources

and more recently, as a principal geotechnical engineer

at Advisian.

In this role,

Ray serves as a subject matter expert

for a wide range of engineering applications,

including underground mining,

surface mine design, tailings storage facilities,

geo-hazard management, and much more.

Today Ray will talk to us

about a framework for understanding risk

in geotechnical engineering.

Ray, over to you.

Thank you, Chris.

So my talk is about understanding geotechnical risk

and the corresponding uncertainty we often face.

It’s a structure for understanding uncertainty.

The next slide, please.

It starts with this idea

that small data sets and the corresponding uncertainty

that comes with them

are a common circumstance in geological engineering.

And by small data sets,

I mean either the actual small number of values

that we might have,

or small in the sense of a sample-to-volume ratio.

We’re trying to characterize a very large volume of ground

with a very small number of data points.

And these small data sets create two problems

in understanding risk.

The first is pretty immediate.

I mean, we have an analysis to do,

we only have a few data points to choose from,

and we have to pick an appropriate point

that we think represents the ground conditions

or wherever else we’re trying to characterize.

The second problem is less immediate,

but ultimately it’s a lot more important.

And it’s the focus of this talk really.

Because in selecting this value,

what we’re doing is we’re making some assumptions

about that range of data.

And that goes into our analysis,

and that goes into our risk quantification.

And ultimately that goes to our resource allocation

that we have for mitigating that risk.

And we now have this line

between the inherent uncertainty that we’re dealing with

from these small datasets,

all the way to the end,

where we’re actually allocating resources

to mitigating that problem.

So it’s really important to understand

how we think about uncertainty

so that when we get to this resource allocation,

we’re actually applying optimal levels

of mitigation to a problem.

Next slide.

So one thing to note:

when we say uncertainty,

it’s not just this big black box of unknowns, this void.

One of the advantages we have in geomechanics

is that a lot of our data sets,

or rather, the types of data and information we use

are fairly quantitative.

And so, because of that,

we can develop this relationship

between the little that we do know,

that small data set that we have,

and this larger uncertainty

about what the possible range could be.

We have this idea that variation

is the range of what we know, whatever that range is.

And the uncertainty is what we don’t know.

And given that it’s often quantitative,

that uncertainty can have an open or a closed end.

A lot of times the minimum value is often zero.

The other end can be open in certain circumstances:

Q values, compressive strength, things like that.

But at a certain point, it doesn’t matter anymore.

Once you get past a certain data point

or strength value or whatever,

it doesn’t matter if it’s 350 MPa or 325 MPa,

it’s strong enough, basically.

So when we start to overlay these two,

we see that this is a useful building block now

for understanding risk,

because we have this chunk of certainty or knowns

in the middle,

and then a chunk of uncertainty around the sides.

Go to the next slide, please.

It’s a simple diagram,

and it’s going to be the basis for what I’m talking about

with respect to risk.

But I’ve started off right away

with this very idealistic version of what this looks like.

I’ve got this range of variation that we know in the middle,

bounded by equal ranges of uncertainty on either end.

Chances are actually a lot better

that there’s an asymmetry involved.

Either there’s going to be a lot more uncertainty

on one end or the other.

And this is going to, again,

influence how we think about risk

as a function of uncertainty.

Now I’ve talked about this being quantitative information.

So it’s easy to think about this

in terms of a number line and zero at the left side

and whatever the maximum value is at the right side.

And that’s okay to think about it that way.

Since we’re talking about risk, though,

and sometimes low values can be lower risk

or high values can be lower risk,

it’s best not to think about it necessarily as numbers,

just as relative better or worse

in terms of where this certainty lies.

There’s also a possibility where it could be gapped.

We could have some sort of chunk of what we know

and the uncertainty, another chunk of what we know again,

and then uncertainty on either side of that.

For the purposes of this talk

and just to simplify matters a little bit,

I’m going to treat this as basically a bi-modal variation.

And we just have the same sort of circumstance.

There’s a range of certainty that we know about or we know,

and then a range of uncertainty on either side of it.

Next slide please.

So now we want to talk about upside or downside asymmetry.

So I’ve talked about this idea

that we can have significantly

more uncertainty on one side or the other

of our range of what we know.

And to do that,

we want to think about this critical value,

this concept of a critical value.

This is the value at which

a given input produces a given output.

And if you put in a lower input value,

you will get a worse answer;

anything to the left of that will be worse,

anything to the right is better.

So this is the value.

I mean, probably the easiest way to think about it

is, say a stability analysis,

and you need a certain compressive strength

to produce a factor of safety.

So if you have a lower compressive strength,

you’ll get a worse factor of safety,

and a higher compressive strength

is a better factor of safety.

So it’s this critical value.

And now we can start to see: where does our variation lie,

versus where does our uncertainty lie,

relative to this critical value?

So we can have asymmetric downside risk,

where basically what we don’t know makes the problem worse,

or asymmetric upside risk,

what we don’t know makes the problem better

relative to this critical value.
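To make the critical-value idea concrete, here is a minimal sketch in Python. It assumes a simplified dry infinite-slope stability model, and every number in it (unit weight, depth, slope angle, target factor of safety) is invented for illustration rather than taken from the talk.

```python
import math

# Hypothetical simplified dry infinite-slope model; all parameter
# values below are invented for illustration.
GAMMA = 20.0             # unit weight, kN/m3
DEPTH = 10.0             # depth to slip surface, m
BETA = math.radians(30)  # slope angle
TARGET_FS = 1.3          # required factor of safety

def factor_of_safety(cohesion_kpa, friction_deg):
    """Factor of safety = resisting shear stress / driving shear stress."""
    normal = GAMMA * DEPTH * math.cos(BETA) ** 2
    driving = GAMMA * DEPTH * math.sin(BETA) * math.cos(BETA)
    resisting = cohesion_kpa + normal * math.tan(math.radians(friction_deg))
    return resisting / driving

# The critical value: the lowest cohesion input that still meets the
# target. Inputs to its left give a worse answer, inputs to its right
# a better one.
critical = next(c for c in range(0, 200)
                if factor_of_safety(c, friction_deg=25.0) >= TARGET_FS)
```

The point is only that the model turns "left of the value is worse, right is better" into something checkable: any cohesion below `critical` drops the factor of safety under the target, any cohesion above it clears it.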

Next slide, please.

Now we want to talk about magnitude of uncertainty.

How big is this range?

We can have of course,

significant downside uncertainty

in the case that I’ve shown.

There’s a lot of uncertainty below this critical value,

or we can have minor downside uncertainty.

There’s just a little bit.

Again, if we think about

a lot of different geomechanical data,

the minimum value is zero.

So if the lowest known point is slightly more than zero,

yeah, there’s some uncertainty,

maybe there’s a value that would fit into that range,

but it’s a pretty small range

between zero and whatever our minimum value is.

So again, for the purposes of this talk

and to keep things simple,

if we have minor downside uncertainty,

I’m not even going to think about that as uncertainty,

it’s just treated by extending your variation

a little bit more.

Really the purpose of this

and talking about risk and uncertainty

is talking about circumstances

where we have significant

either downside or upside uncertainty,

where we have a lot of unknowns on one side or the other

of that critical value.

Next slide, please.

Now, of course,

there’s a sensitivity too that we have to consider.

This is how sensitive the output value is

to a change in the input value.

We can have circumstances

where our output is insensitive and reasonably linear:

as we make these gradual changes,

putting in higher or lower values

relative to this critical value,

we don’t see much change in our answer,

or we can have very sensitive

and potentially non-linear answers relative to inputs.

We can start to see

that we either get a significant change

in the slope of that output,

or we have just a very significant sensitivity

at the end of the day.

So either one is a cause for concern in this case.
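Sensitivity of this kind can be probed numerically. The sketch below is a toy illustration, and both response functions are invented: sweep the input and compare the local slope of a flat, linear output against a steepening, non-linear one.

```python
# Toy sensitivity check; both response functions are invented
# for illustration.
def insensitive_output(x):
    return 1.0 + 0.002 * x           # nearly flat, linear response

def sensitive_output(x):
    return 1.0 + 0.00005 * x ** 2    # steepens as x grows

def local_slope(f, x, h=1.0):
    """Central-difference estimate of df/dx at x."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Near x = 100 the non-linear output reacts five times harder
# to the same change in input.
slopes = (local_slope(insensitive_output, 100.0),
          local_slope(sensitive_output, 100.0))
```

The same one-at-a-time sweep, applied to a real analysis instead of these toy functions, is what tells you whether you are in the insensitive band or on the steep part of the curve.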

Next slide, please.

And then of course, risk.

We have all of the different things around the probability

and the range of inputs,

and essentially what that value is going to look like.

And the other half of risk is, of course, consequences.

And our consequences, like sensitivity,

can be low to moderate.

As we’re changing that input value,

we don’t really see a change in the consequence that much.

So say again factor of safety in a stability analysis

is our example.

And we’re reducing that input material strength,

and we’re getting a failure.

But the size of the failure

is not really changing,

the run-out distance isn’t changing.

We’re not really seeing huge differences

in the consequences of that,

even though the factor of safety might be dropping,

it’s not really having an effect

on what the impacts of that would be.

So we can have this lower,

and again, linear consequences

as we go down this potential range of uncertainty,

or we can have very high consequences,

and even non-linear consequences again.

Now one note on the consequences,

we have both downside consequences.

These are often going to take

the form of unmitigated liability.

And the reason I say liability instead of risk

is that it’s sort of the next piece.

Things could be worse than we assume,

higher risks, and then these risks

haven’t been attenuated or mitigated

because we aren’t aware of them,

and that’s going to create a liability.

So that’s the downside consequences,

this unmitigated liability.

And the upside consequences are going to be more

in the form of opportunity costs.

Essentially we could have had a leaner, meaner construction

of whatever sort.

We didn’t have to have a slope angle that was that shallow.

We didn’t have to have an embankment that was that big.

We dedicated resources to something

that we didn’t need to necessarily

to achieve our desired outcome in terms of safety.

Next slide, please.

So given this construct, with sensitivity

greater or lesser,

the asymmetry in the outcome upside or downside,

and then the consequences either higher or lower,

we have this box of possibilities

in terms of these risk scenarios now,

and uncertainty scenarios

that we’ve got eight different circumstances

that we can look at

in terms of all of these different ways

we can think about risk as a function of uncertainty.

Next slide, please.

So we’ve talked about now

the first two pieces of that flow diagram

that I showed in the earlier slide

with uncertainty and assumptions.

That’s how we start to think about risk.

Now we talk about the analysis and the risk mitigation,

and this is through the tools that we use.

These are all these different tools

that are available to us as geotechnical engineers

to address this uncertainty.

How do we think about uncertainty?

I won’t say that this is the definitive way

to think about these tools,

but I will say that there is a continuum of sorts

between all these different tools that we have.

And this slide isn’t meant to capture all the tools

that are available to us as geotechnical engineers.

But to talk about them in terms of these broad categories,

where we have tools

that are based in inductive reasoning and inference,

these are things like the first picture

where we have something about A that we don’t know.

This could be again, a material strength.

We know something about A,

but we tend to know a lot more about B,

including that material strength,

this target thing we want to know.

And A and B share enough characteristics

that we can assume that whatever material strength B has,

A has the same strength or similar strength.

We have tools that fall into this

sort of proportional relationship.

A is somehow proportional to B.

Think about: we have a few compressive strength samples

where we’ve incurred the higher cost

to do the compressive strength testing

and a whole lot of point load samples.

And there’s a proportionality between those two.

So we can look at the range

of compressive strength variation

as a function of the range of point load strength variation.
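As a sketch of that kind of proportional inference, with every test value below invented: calibrate a conversion ratio from the few paired tests, then project the compressive-strength range from the larger point-load data set. Published UCS-to-Is50 ratios are often cited around 20 to 25, but a site-specific calibration like this is generally preferred.

```python
# Invented paired results: a few expensive compressive (UCS) tests
# and the point-load index (Is50) measured on the same core.
paired_ucs = [110.0, 95.0, 130.0]   # MPa
paired_is50 = [4.8, 4.1, 5.6]       # MPa

# Site-specific proportionality: mean ratio of UCS to Is50.
k = sum(u / i for u, i in zip(paired_ucs, paired_is50)) / len(paired_ucs)

# Many cheap point-load results spanning the volume of interest.
is50_all = [3.2, 4.1, 4.4, 4.8, 5.1, 5.6, 6.0]

# Projected UCS variation inferred from the point-load variation.
ucs_range = (k * min(is50_all), k * max(is50_all))
```

The few expensive tests anchor the ratio; the many cheap tests supply the spread.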

There’s a lot more, of course,

again, these are not meant to be exhaustive lists

of all these different tools,

just to get the sense of what an inference

or inductive reasoning type of tool looks like.

We have parametric tools; these are basically

the kitchen-sink approach.

We’re throwing a lot of different things

into a bin and sampling from them.

This can be a lot of different types of variables.

And we’re trying to come up

with some sort of parametric analysis

based on Monte Carlo, Latin Hypercube sampling, whatever

that produces this range of outcomes.

And we can start to look at that range of outcomes

and make some conclusion from that.
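A minimal Monte Carlo sketch of that idea, assuming a toy linear factor-of-safety function and invented input distributions:

```python
import random
import statistics

random.seed(0)  # reproducible illustration

# Toy output function; the coefficients are invented, not a real design.
def factor_of_safety(cohesion_kpa, tan_phi, normal=150.0, driving=90.0):
    return (cohesion_kpa + normal * tan_phi) / driving

# Sample the uncertain inputs many times and collect the outcomes.
outcomes = []
for _ in range(10_000):
    c = random.uniform(20.0, 60.0)      # cohesion, kPa
    tan_phi = random.gauss(0.47, 0.05)  # tangent of friction angle
    outcomes.append(factor_of_safety(c, tan_phi))

# Inspect the range of outcomes: the fraction of samples below a
# failure threshold of FS = 1, and the spread of the distribution.
p_fail = sum(fs < 1.0 for fs in outcomes) / len(outcomes)
spread = (min(outcomes), statistics.median(outcomes), max(outcomes))
```

Whether the outcomes cluster around a central value or trail off into long tails is exactly the decision point raised later in the talk; Latin Hypercube sampling would slot in here in place of the plain `random` draws.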

There’s a lot of things around say subjective probability

that might fall into this as well.

A lot of different tools

where we’re basically just looking at

what the distributions are

across a lot of different ranges of our variables.

And then we have these direct

or deductive reasoning types of tools

where we’re just either looking at

what information we have, this is the variation,

or maybe we’re extrapolating from something.

A lot of times frequency or recurrence interval

might fall into these types of deductive reasoning tools.

We have a bunch of data from a time history,

and we’re going to extrapolate that out a little bit

and pretty much this is what we can assume

about the circumstance from the information we have.

We could also assume, in some cases, just the minimum value.

If we know that the range is bound in a certain way,

it starts at zero, it goes to a hundred,

maybe we pick one or the other

as far as an upper or lower bound to what that would be.

So these things as well,

I’m going to argue, fall on a continuum.

There aren’t necessarily any hard lines between them, but–

Next slide, please.

We will talk about the relative strengths

and weaknesses of them.

And again, not meant to be an exhaustive list

just to illustrate that each of these has a place and a use

in terms of addressing uncertainty.

With inference and inductive reasoning,

a lot of times we’re using a lot of our knowledge

and understanding as geotechnical engineers

to relate one thing to another,

or use some other bit of data to modify

or increase the precision of our estimate.

We could say, if we don’t know a material strength,

we can assume it’s zero, that’s pretty conservative.

We don’t want to do that necessarily,

so we’re using inference

to increase the precision of that estimate.

Of course, the weakness of this

is that it’s really based on knowledge,

and from practitioner to practitioner, that can vary.

I might be really good at estimating material strength

from all these other material strengths that I’m aware of,

the next person has maybe more of a limited expertise

in that area.

And you’re going to get very different answers

from inference and inductive reasoning

from practitioner to practitioner,

probably the basis of a lot of arguments

that we have as geological engineers.

The direct or deductive reasoning,

the strength there is that

since you’re assuming either from what you know,

or, more importantly, from some end value of this,

you’ve sort of covered all the bases.

You’re not going to be surprised

by something that wasn’t captured in your assumption.

The weakness of course,

is that these can be fairly conservative estimates.

With the parametric tools,

the strength is that it’s actually

kind of drawing on the strengths

of both inference and inductive reasoning

as well as direct and deductive reasoning.

And so it’s pulling the best of each of those.

The weakness is that

this can require considerable time and expertise.

You would have to pull from a lot of different people.

You’re going to have to deal

with some of those issues around,

again, both of the weaknesses of each method.

The other problem it can cause

is that you end up with this range of possible outcomes

that’s going to vary

from some extreme adverse outcome to extreme good outcome,

and you’re going to have to make some decisions

around which one’s going to be the appropriate outcome.

How do you decide?

Do you have a cluster of outcomes around a central value?

And that’s a good thing.

Or do you have these long tails

that you have to make some decisions about?

It can sort of solve some of the problems

of inference or deductive reasoning on the front end,

but cause more problems on the backend.

So no one tool is perfect,

but they all have their advantages and disadvantages.

So next slide.

So now we’ve compartmentalized

all these different circumstances of uncertainty and risk,

and now we have the different tools that we apply.

And we’re going to start talking about

how each of those tools fits

each of these different circumstances.

So we have that box of possibilities

from the previous slide,

and we split it,

and we’re looking at the downside on the left side,

and the upside on the right side.

We can start to look at how these tools

apply to these different circumstances

as a function of sensitivity and consequence,

and then upside or downside risk.

So I’d like to talk about these for a little bit,

and I’ll start with the downside risk

in the lower left-hand corner.

We have a situation where we have pretty low consequences,

pretty low sensitivity, or insensitivity.

It’s downside risk, but essentially,

because of this insensitivity and lower consequences,

you can assume fairly extreme values

without really any cost in terms of allocation of resources.

So that’s a pretty good place for that tool to sit.

In this middle band,

we have either the higher consequences

with the higher sensitivity,

(coughing) excuse me.

Some of these inductive tools are going to be more important

because now either we have to think about consequences

or we have to think about that sensitivity.

We do want a little bit more precision

in how we approach this.

We want to be aware of implausible or extreme values

and how those might affect our answer,

but we don’t want to let them influence our answer too much

because they could lead to such an extreme outcome

in our risk assessment

that we’re, again, misallocating resources.

So we want to start using some of these inference tools

to increase the precision of our input assumptions.

And then finally, when we get to

the upper right quadrant on the downside risk,

it really speaks to

having a lot of sensitivity, high consequences.

We need to look at this parametric approach

because we want to capture potentially some relationships,

either within a variable,

or due to non-linear responses,

or maybe some combination of variables

that may not be intuitive.

We really want to see what that full range of outcomes

looks like.

So for the upside,

we have a similar set of different tools

that are going to be applied to these different compartments,

but a slight difference.

If we start in the lower right-hand corner this time,

we have low sensitivity, low consequences,

but because it’s more of a matter of opportunity costs,

we want to use this parametric approach

to understand those a little bit better.

There’s some value in looking at those

so that we’re understanding

that we’re again, allocating resources appropriately.

For these middle two boxes

where we have the higher consequences, but less sensitivity,

again, these indirect and inductive reasoning methods

are important to increase that precision around our answer.

But once we get into the lower consequences

with greater sensitivity,

the direct and deductive approaches

are more important to use.

And of course, when we get up

into the upper left quadrant there

with greater sensitivity and higher consequences,

we want to use those parametric approaches

again, to understand

if there’s some sort of non-intuitive outcome

that we can experience,

or to look at whether those outcomes are clustered

around some sort of central value

or have these longer tails

that might be important to consider.

Again, it speaks to

how do we start to look at the tools

versus the circumstance

to properly mitigate risk

or allocate resources to mitigate risk.

Next slide, please.

So where do we come to with all this?

We can use this relationship between what we do know

and the range of what we may not know or don’t know

to think about how to characterize uncertainty

relative to risk.

And you can agree or disagree

with any parts of this discussion

or any parts of my presentation.

What it does come down to

again, is this fundamental idea

that we can discuss this and go on and on

and talk about our different approaches and whatnot.

But we do have to think at the end of the day

about that allocation of resources.

And so the purpose of all this

is just to highlight

that there is this structure to uncertainty.

There are impacts that the tools

that we use as geological engineers have

to how we think about that.

And when we start to marry those two

and look at the circumstances of uncertainty

and the tools that we have for addressing it,

we really want to make sure that that’s a good marriage

in terms of producing this optimal allocation of resources

at the end of the day.

And that’s really the message of this entire talk.

Next slide, please.

Thank you for your time and attention.

(upbeat music)