
Sarah Conolly looks at exploratory data analysis and shares tricks using our interactive histogram to help with refining geological models and estimation domains, using Leapfrog Geo.



Sarah Conolly
Senior Project Geologist – Seequent


9 min


Video Transcript

Okay, hi there everyone.

As Ryan just mentioned, my name is Sarah Conolly

and I’m part of the technical team

here in Vancouver for Seequent.

And today I’m going to be talking you through

my tips and tricks, and they’re called

“When your data speaks volumes”.

I gave it a pretty vague title for this one.

If any of you know me from Geo,

that could mean pretty much anything in there.

But what we’re going to do today is

take a look a little bit further down

the mining value chain.

Jeff and Anna were on a little earlier:

Jeff found this deposit for us, thanks Jeff.

Anna cleaned up my data,

and then I’ve actually got a model

that has been built in Leapfrog Geo.

I’m going to be moving closer towards my resource evaluation

and I’m actually going to be focusing on my EDA,

so my exploratory data analysis.

Hopefully to show you a few little tricks

using our interactive histograms to maybe think about how

you could further refine your geological model

and your estimation domains.

So I’m going to jump into Geo.

Excellent. Okay.

So, here we’re in the same deposit

that Anna and Jeff were looking at.

We’ve got two massive sulfide lenses here.

It’s a multi-commodity deposit.

So we’ve got copper, zinc, lead, silver and gold.

My two massive sulfide lenses here

have been built from the geology.

So we’ve taken a look at our logging information,

we’ve found those massive sulfide intervals,

and we’ve created our volume.

I am going to look at one of those volumes

and just focus on this massive sulfide domain.

So here I’ve got that volume of interest.

I’ve got my data loaded, and here

I’m looking at my zinc assays.

But, I don’t understand the kind of statistics

that lie behind this and whether or not I’m happy

for my volume of interest to move forward

into my resource estimation.

So I’m going to take a look at a histogram of this data.

Just make this a little bit smaller,

so you can see them both. Okay.

So I’ve done a few things before we got here today.

I’ve merged my assay data with my geological information

and I’ve created a filter so I can look

at just my values within this volume.
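
The merge-and-filter step described above can be sketched outside Leapfrog too. This is a minimal illustration in pandas, not Leapfrog's actual mechanism; the column names (hole_id, from_m, to_m, zn_pct, unit) and the values are invented for the example.

```python
import pandas as pd

# Hypothetical assay and geology (logging) interval tables.
assays = pd.DataFrame({
    "hole_id": ["DH01", "DH01", "DH02"],
    "from_m":  [10.0, 12.0, 8.0],
    "to_m":    [12.0, 14.0, 10.0],
    "zn_pct":  [4.2, 0.3, 6.1],
})
geology = pd.DataFrame({
    "hole_id": ["DH01", "DH02"],
    "from_m":  [9.0, 7.0],
    "to_m":    [13.0, 11.0],
    "unit":    ["massive_sulfide", "massive_sulfide"],
})

# Tag each assay with the logged unit whose interval contains its midpoint.
assays["mid_m"] = (assays["from_m"] + assays["to_m"]) / 2
merged = assays.merge(geology, on="hole_id", suffixes=("", "_geo"))
merged = merged[(merged["mid_m"] >= merged["from_m_geo"])
                & (merged["mid_m"] < merged["to_m_geo"])]

# The "filter": keep only assays inside the massive sulfide domain.
domain = merged[merged["unit"] == "massive_sulfide"]
```

The same idea underlies a merged table plus a query filter: join assays to logged geology on drillhole and depth, then restrict to the volume of interest.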

And I guess when I first started working

on grade control models,

the first thing that I was taught

about estimating anything is that

you’ve got to have consistent geology.

I already told you that we have modeled this

from the logging.

Obviously we validated that against the core photos

and I’m happy with that side of it.

Number two, that you’ve got one search orientation.

You could maybe argue that

there’s a little bit of folding in here,

but for today we’re going to go with, yep,

we’ve got that one major trend to our ore body here.

Number three, that we have a single population

within that estimation domain.

And if you take a look at my histogram on the right here,

hopefully you can see we’ve got that

kind of classic bell curve that we’re after.

I’m fairly happy that I have one population.

But take a look at the right-hand side of my histogram.

You can see that above around 25%

we’re actually getting a bit of a breakdown

in that population, and we’ve got a few outliers.

How do I find those outliers? And what do I do with them?

Do they cluster? Are they scattered?

Do I need to rethink my estimation domain?

Now I can build some filters to kind of try and find

where that data is.

However, in Leapfrog in the statistics tool,

all you need to do is click on your histogram

and those samples will appear in the scene.

So now, if I kind of spin around with my volume,

you can see those high grade outliers

are actually scattered through my domain.

And I’m probably more likely to deal with them with a

top cut or maybe some restrictions during my estimation.
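
The top-cut idea mentioned above can be sketched numerically: cap the scattered high-grade outliers at a chosen percentile rather than re-domaining them. This is an illustrative NumPy sketch with simulated assays; the 97.5th percentile is a hypothetical choice, not a recommendation for any real deposit.

```python
import numpy as np

# Simulated stand-in for zinc assays (lognormal, like many grade distributions).
rng = np.random.default_rng(0)
zn = rng.lognormal(mean=1.0, sigma=0.5, size=500)

cap = np.percentile(zn, 97.5)      # top-cut threshold (assumed percentile)
outliers = zn > cap                # the samples you would click in the histogram
zn_cut = np.minimum(zn, cap)       # apply the top cut

print(f"{outliers.sum()} samples capped at {cap:.2f}")
```

Clicking the histogram tail in Leapfrog answers the spatial question (clustered vs. scattered); a cap like this is one conventional way to then handle scattered outliers before estimation.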

So the zinc I’m pretty happy with to move forward.

If we take a little look at the silver distribution,

maybe we’ll see a different story.

Looking at these assays in this view,

the warm colors are representing the higher grades,

the cooler colors a lower grade.

And hopefully you are seeing

that we have a fairly sharp contact

through the middle here.

But let’s have a look at the statistics behind this as well.

So I’m going to look at my silver histogram.

And unsurprisingly, it looks like we’ve got

a bi-modal distribution.

So instead of a perfect kind of bell curve

we’ve got a peak in the lower grades,

and then another peak in the higher grades.

If we were to move forward with estimation

do these samples really relate to each other? Maybe not.

Maybe we’d want to sub-domain this silver

into separate units.
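
One way to sanity-check a bi-modal read like the one above is to histogram the log grades and look for the valley between the two peaks. This is a hedged sketch with simulated data, not the deposit's real silver assays; the populations and bin count are assumptions.

```python
import numpy as np

# Two simulated silver populations: a low-grade and a high-grade mode.
rng = np.random.default_rng(1)
low = rng.lognormal(mean=1.5, sigma=0.4, size=300)    # ~4.5 g/t population
high = rng.lognormal(mean=4.5, sigma=0.4, size=300)   # ~90 g/t population
ag = np.concatenate([low, high])

counts, edges = np.histogram(np.log(ag), bins=30)
# The emptiest interior bin approximates the valley between the two modes.
valley = np.argmin(counts[5:-5]) + 5
split_gpt = np.exp((edges[valley] + edges[valley + 1]) / 2)
```

A threshold near that valley is a reasonable first guess for where to split the two populations before modelling them as separate units.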

So again, these interactive histograms

can be really handy to do that.

So if I highlight the low grades, I get this in the scene.

If I highlight the high grades,

you get those intervals instead.

Which is great.

What’s the next step?

We want to split them out

and actually model them as separate units.

So if we layer up this interactive histogram

with our interval selection tool,

we can then create a refined model

of our massive sulfide lens.

So I’m just going to create a new interval selection.

My silver domains.

And again, I’m just going to re-highlight the low grade

samples and I’ll display them so you can see them.

And then I just use my little

“select all visible intervals” button.

And I’m going to assign these to a new category.

So I’ll call this silver low grade.

Let’s go back to that interactive histogram.

Select the high grade.

Select all those little intervals.

And we’ll categorize these as our high grade.

So I’m done with my histogram now.

I’ll shut that down and save my interval selections.

And we can display them in the scene.

So now, rather than numeric data,

I’ve re-categorized that into my low grade samples

and my high grade samples.
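
The manual interval selection above can be mimicked numerically: pick a threshold between the two histogram peaks and re-categorize each numeric sample. This is an illustrative sketch; the 50 g/t split, the values, and the category names are hypothetical.

```python
import numpy as np

# Stand-in silver assays (g/t), invented for illustration.
ag_gpt = np.array([3.0, 7.5, 12.0, 110.0, 240.0, 95.0, 5.5])

# Assumed threshold chosen between the two peaks of the bimodal histogram.
threshold = 50.0  # g/t

# Turn numeric data into categories, like the interval selection does.
category = np.where(ag_gpt >= threshold, "silver_high_grade", "silver_low_grade")
```

In Leapfrog the selection is done visually against the live histogram, which lets you confirm the split spatially at the same time; this sketch only captures the categorization step.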

And the next thing that I’m going to do

is actually use these to build another surface

in the middle of my volume of interest.

So if we go down to our geological models folder,

we can create what are called refined models.

And what they do is basically take

the volume from the first-pass model,

and then we can refine it like (undistinguishable)

I’m taking this massive sulfide lens

from my geology model.

I’m going to look at my new intervals

and create a new volume based off

the selections that I just created.

As I’m presenting, it will create my volumes

and my surfaces. I’ll just hide those drill hole intervals.

So now we can see that within the volume of interest

I’m splitting it by those two different domains I created.

And as a final result,

I could move forward with the two separate volumes

on to my resource estimation.

So that was my little tips and tricks for today.

I was using my data to kind of build my confidence

in my output volumes to move on to estimation

at the end of the day.

Thanks for listening.

(crowd clapping)