Zoe Reid Lindroos, Senior Geologist at Seequent, looks at imported drillhole data, new columns and merged tables created within Leapfrog, understanding how all of these options can help optimise your workflows.

Zoe will present workflows and go over precautions to consider when setting up your drillhole database in Leapfrog before you commence modelling. Drillhole interval data can be imported into Leapfrog via .csv files, the Central dataroom, the ODBC Link, or the acQuire Link. Downhole points and structures associated with drillholes can also be imported. Once the drillhole data is within Leapfrog there are many options available for interpreting this data via new columns or tables.



Zoe Reid Lindroos
Senior Geologist – Seequent


45 min




Video Transcript

Welcome to today’s webinar

on Drillhole Data Tips in Leapfrog.

My name is Zoe and I’m a senior project geologist

with Seequent here in Perth.

There is a lot of flexibility

when preparing data for modeling in Leapfrog,

and there’s no right or wrong way to do this.

What we will present here are some options

and precautions to consider

before starting the modeling process

to ensure you get the best workflow set up

from the beginning.

This technical session will cover

taking a look at importing drillhole data.

This includes interval data such as logging and assay data,

downhole points, and downhole structural measurements.

We’ll look at validating that data within Leapfrog,

and I’ll point you towards some good resources we have

about how to deal with error handling.

We’ll look at how to create new columns within Leapfrog

from that imported data, and how to set up merge tables.

And finally, once we’ve completed all of these steps,

how to choose a base lithology column

for a geological or domain model,

and how to apply a query filter to this model,

if appropriate.

The first step we’ll look at is importing drillhole data.

Leapfrog Geo can import drillhole data

from files stored on your computer or network location,

from a Central project’s data room, if you have Central,

from any database that runs an ODBC interface,

or from an acQuire database.

For any of these options, once the data source is selected,

the process of importing drillhole data is very similar.

To start, right click on the drillhole data folder

and select import drillholes.

And we’ll navigate to the folder location in this case.

Leapfrog is expecting a minimum of a collar table,

survey table, and at least one interval table.
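Outside Leapfrog, it can help to picture what that minimum set of files looks like. Here is a rough sketch of a collar, survey, and assay interval table as CSV text, parsed with Python's csv module; the column names are typical examples, not Leapfrog's required headers:

```python
import csv
import io

# Hypothetical minimal CSV contents for the three required tables.
collar_csv = "HoleID,East,North,RL,MaxDepth\nDH001,5000,8000,350,120\n"
survey_csv = "HoleID,Depth,Azimuth,Dip\nDH001,0,90,-60\nDH001,60,92,-61\n"
assay_csv = "HoleID,From,To,Au_ppm\nDH001,0,1,0.02\nDH001,1,2,1.35\n"

def read_table(text):
    """Parse a CSV string into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

collar, survey, assay = map(read_table, (collar_csv, survey_csv, assay_csv))
```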

The survey, assay, and lithology tables

will be automatically selected

if they’re named something sensible.

We can add additional tables here at this stage

or remove tables if you don’t want to import them,

and we can also add downhole points

at the stage of the import.

Downhole structures will need to be imported

once these drillholes are in the project.

So we’ll click import and check the auto mapping

or manually map the columns for each of these tables,

and click next once you’re happy with each table set up.

The table that you’re working on will be in bold

up in the progress bar at the top here.

We won’t go through each column individually

in this exercise, but I’ll highlight the import as options

as we step through.

So the collar table, we’ll select next

and same with the survey table,

these columns have been automatically mapped.

With the assay table, the hole ID from and to

have been automatically selected,

but the gold and arsenic values haven’t been selected yet.

This is where the import as options

are important to fully understand.

So we’ll step through those.

Where it says not imported at the top here,

we can drop down and choose how to import this column.

The lithology and category options are used to import

any categorical data such as lith codes,

weathering, and alteration,

and their functionality is equivalent in Leapfrog.

Text is used to import summary or description columns,

and columns imported as text cannot be used for modeling.

So their use is limited within the project.

I generally only bring in free text fields

such as logging comments.

Numeric is used to import any numeric data

such as assays, temperature, and rock quality designations.

Timestamp and date are used to import

time and date information,

and custom formats are supported with these.

The URL option is used to import an associated URL

and can be used to connect core photos,

which are stored in a directory, to the correct drillholes.

These column types cannot be changed after import,

so choose carefully.

However, in the latest version of Leapfrog,

imported columns can be deleted and re-imported

if you do happen to bring them in incorrectly,

and then they can be renamed also

if you need to remap to a different column here.

So let’s go through and map the columns

for these data tables.

So the gold and arsenic values will come in as numeric.

Their lithology will come in as lithology.

We’re now looking at our points file.

So we’ve got a hole ID, depth,

and the structure type that’s been logged downhole,

and that structure type can come in as a category.

I’ll click finish, and these will come into the project.

Once the drillholes have imported,

they will appear in the project tree

under drillholes object.

Now that they’re in the project,

we can also bring in downhole structural measurements.

These are imported by right clicking on the drillholes,

choosing import from file, and planar structural data.

I’ll select my structure file here.

And you can see we’ve got hole ID, depths.

We’ve got dip and azimuth as well as alpha and beta.

So you only need one or the other, not both.

I’ll bring in the alpha and beta measurements

and Leapfrog will automatically convert to dip

and dip direction.

I’ve also got a category column here for structure type.

So I’ll bring that in associated

with each of those structures and click finish.

Now that we have the data in our project,

we can move on to validating our drillholes.

There are some things we should consider

when importing data into Leapfrog.

This isn’t a comprehensive list,

but is a good starting point to make sure that

we’re getting the information that we need for our model.

Do we have any missing assay values

and how do we want to handle these?

We’ll talk a little bit later

about the implications of replacement rules.

Are there any values that are outside

the expected range of data, for example, percentages

or RQD greater than 100?

Is the spatial position of data correct?

Are survey files validated and up to date?

Are there any duplicate points or overlapping intervals

within our data set?
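Checks like the last two can also be scripted before the data ever reaches Leapfrog. A minimal sketch, using hypothetical keys (hole, from, to), that flags inverted and overlapping intervals within each hole:

```python
def validate_intervals(rows):
    """Flag inverted (from >= to) and overlapping intervals per hole.
    Rows are dicts with hypothetical keys: hole, from, to."""
    issues = []
    by_hole = {}
    for r in rows:
        by_hole.setdefault(r["hole"], []).append(r)
    for hole, ivals in sorted(by_hole.items()):
        ivals.sort(key=lambda r: r["from"])
        for r in ivals:
            if r["from"] >= r["to"]:
                issues.append((hole, "from >= to"))
        # Compare each interval with the next one down the hole.
        for a, b in zip(ivals, ivals[1:]):
            if b["from"] < a["to"]:
                issues.append((hole, "overlapping intervals"))
    return issues

rows = [
    {"hole": "DH001", "from": 0.0, "to": 2.0},
    {"hole": "DH001", "from": 1.5, "to": 3.0},  # overlaps the previous interval
    {"hole": "DH002", "from": 5.0, "to": 4.0},  # inverted from/to
]
problems = validate_intervals(rows)
```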

Some other things to check in the database

that may not be picked up by Leapfrog

or the software that you’re using are listed here below.

So these are things like does the database contain

all of the programs drilled?

Have all relevant assays been returned?

Were any QAQC issues raised with the recent drilling?

If there are any missing data intervals, can they be found?

Does the database contain all of the surface,

underground, and face-chip data?

Has the collar table been checked for unsurveyed holes

or holes that haven’t had their collar surveys validated?

And is there any unusual deviation

with the diamond drill holes?

A visual check can often pick this up.

For face-chip samples, are they correctly positioned

within as-builts and development?

And are there any face-chips at unexpected angles

to development?

So the common errors and warnings within Leapfrog

are shown here.

Some examples include duplicate hole ID errors.

If a hole ID is missing from the collar table,

that will throw up an error,

and if collar max depths are exceeded in interval tables,

that will also throw up an error.

If the from depth exceeds the to depth

or there are any overlapping segments,

this will show up as an error in Leapfrog,

and Leapfrog will also identify

if there are potential wedge holes.

As well as these common errors and warnings,

Leapfrog will also pick up invalid values

in numeric columns.

And let’s have a quick look at our assay table here.

So we don’t have any errors in this dataset,

but we do have a couple of warnings.

So that’s saying there are no values

for these certain hole IDs,

and we do have some invalid value handling issues.

So we’ll take a look at gold here.

We’ve got missing intervals and missing values.

We don’t have any non-numeric values, so that’s fine,

but these will also be picked up.

So for example, a less than symbol in your database.

And we don’t have any negative values

or non-positive values here in this dataset either,

but these will be picked up too.

We can choose how we want to handle

these missing intervals and values.

We can either omit them or replace them.
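As a sketch of what these omit-or-replace rules amount to: below, missing values are omitted and below-detection entries are replaced with half the detection limit. Half the detection limit is a common convention, but the right rule is site-specific and should mirror whatever you decide in Leapfrog:

```python
def clean_assays(values, replace_half_dl=True):
    """Apply simple invalid-value handling rules to raw assay strings:
    omit missing values, and either omit '<DL' entries or replace them
    with half the detection limit (a common but site-specific choice)."""
    out = []
    for v in values:
        if v in ("", None):  # missing value: omit
            continue
        if isinstance(v, str) and v.startswith("<"):
            if replace_half_dl:
                out.append(float(v[1:]) / 2)
            continue  # otherwise omit below-detection values
        out.append(float(v))
    return out

cleaned = clean_assays(["0.35", "", "<0.01", "2.1"])
```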

In this case, I’m going to omit them

and say that these rules have been reviewed,

and same here for the arsenic.

We’ve got missing intervals and values

so they can be omitted.

And the non-positive values, we’ve got a 0.0,

and I’ll just keep that one in the dataset.

Saving those rules will remove the red flag

from this assay table.

If any errors or inconsistencies are detected in your data,

aside from these invalid value handling rules,

then these must be fixed in the original database

and reloaded into Leapfrog.

If you just fix them within Leapfrog,

they’ll be local fixes only,

and any time the data is reloaded,

those overlapping errors or duplicate collar IDs,

they’ll come back into the project.

So best practice is to fix them at the database stage.

Once data has been checked,

it can be classified within Leapfrog

using a category selection on the collars.

For example, we can flag validated or not validated collars,

or we can set up some kind of traffic light system

for low, medium, and high confidence,

and I’ll show you how to do this in Leapfrog right now.

Bring the collars into the scene view,

and we’ll set up a new category selection on these collars.

So I happened to know that these vertical holes here

are historic holes that haven’t been validated.

So I’ll flag these ones.

So I select these collars and assign them

to my low confidence category.

We also have some holes in progress.

So I flag those as a medium confidence,

we may not have the validated collar surveys back.

So this line of holes here.

And then all of the rest of the holes have been validated.

So I’ll hide these two categories and select everything,

and assign these all to my high confidence category.

We can now really quickly see which collars are which,

and we can also write query filters

based on this confidence category

to use down at the geological modeling stage.

We can now look at interpretation of this data

in preparation for modeling.

Imported data is rarely in a perfect state for modeling.

So some tools within Leapfrog

can help with this interpretation

and classification of the data.

We recommend that you use some of these tools

in an appropriate manner when modeling lithology data

or modeling grade domains.

How you use these tools will be project specific

and partly depends on what type of environment

you’re modeling, and the purpose of the model.

We’ll run through a brief overview

of each of these tools listed below,

starting with the group to lithology column.

The group lithologies tool lets you define a new unit

to which existing units are added.

When you group lithologies,

the original lith column is preserved

and a new lith column is added to the interval table.

You can then select either of these columns

when displaying data and creating models.

The grouped lithology is particularly helpful

when historic logging codes are mixed in

with current company codes.

The group lith tool allows easy recoding

to ensure consistency.

I’ll just quickly demo how to do a grouped lithology.

So we’ll right click on the lithology table,

create a new column grouped lithologies.

The base column will be our lithology that we’ve imported,

and I’ll call this one Lith_Grouped.

We can manually group these codes or we can auto group them.

For this example, I’m just going to auto group

based on the first letter of each code.

That’s going to pop all of those codes into these groups

on the right hand side here.
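The auto-grouping step is essentially a first-letter lookup. Here is a rough sketch with hypothetical lith codes (not the codes in this demo project):

```python
def group_by_first_letter(codes, names):
    """Map each detailed lith code to a group keyed by its first letter,
    mimicking the 'auto group by first letter' option. `names` supplies
    a display name per leading letter."""
    return {code: names.get(code[0], code[0]) for code in codes}

# Hypothetical detailed codes and group names.
lith_map = group_by_first_letter(
    ["CAL", "CLY", "FPO", "MAF", "SST"],
    {"C": "colluvium", "F": "felsics", "M": "mafics", "S": "seds"},
)
```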

I can see all of my C codes, this is my colluvium,

the F codes are felsics,

the L codes are the regolith codes,

our M codes are mafics, and our S codes are our sediments.

Our CIF is our mineralized lithology in this project.

So I’ll just call this one seds,

make the mafics green

and click okay.

We’ll take a look at that table now in the scene view.

I can see that we’re looking at our Lith_Grouped

in the dropdown here, and we can also jump back

and look at the imported lithology.

So we’ve greatly simplified our lith codes

using that grouping,

and we can really easily see now where our seds are sitting,

they’ve been grouped together,

and we’ve made it a lot cleaner for modeling

if we want to model our base of transported

and our regolith.

The next tool we’ll have a quick look at in Leapfrog

is the interval selection tool.

This is a really powerful tool in Leapfrog.

It’s commonly used as it gives you significant control

over the intervals you use to build your models.

If you’re building your geological models

based on drillhole data,

we recommend always basing your geological model

on an interval selection column

rather than the original imported column.

This provides maximum flexibility

as you progress with your modeling.

I’ll create an interval selection column.

So bring our lithology back on,

right click on the lithology table,

create a new column, and choose interval selection.

There are a couple of options here.

We can choose to have a base column of none.

And this means you need to build

your interval selection column from scratch.

It does give you ultimate control

over the contents of this column,

but you might end up with a lot of flicking back and forth

in the shape list to see your raw data.

The other option is to choose

one of the existing lithologies or categories

as the base column.

So for example, we can choose our Lith_Grouped

as the base column for this interval selection,

and it will copy all of the intervals

over from the existing table

and then you can change the codes as needed for modeling.

This approach is useful

if you only need to make minor modifications

to the imported unit codes,

or, for example, if you’re modeling based on grade

using a category from numeric column.

I prefer using this option

as it requires less flicking back and forth,

and all of the intervals are already populated

ready for interpretation.

But ultimately how you set up your tables

and work flows is up to you and your company,

and it will be quite project specific.

In this case, we’ll set the Lith_Grouped as the base column

and I’ll call this Lith_Selection.

And this now gives us the option to split out

any of our grouped lithologies

and assign them to a new category.

So for example, here, we can split out these seds

into different units if we want to for modeling.

The next column type that we’ll look at quickly

is the split lithologies tool.

It’s one that I don’t use very often

as it’s more limited than the interval selection.

With this split lithologies,

you can create new units from a single code

by selecting from intervals displayed in the scene.

The difference between the interval selection tool

and the split lithologies tool is that

with the split lithologies,

you’re limited to selecting intervals

only from a single lithology,

and I’ll show you how that works here.

So here, for example, I’ll select my CIF

as the main unit that I’m looking at,

and I’m limited to only selecting CIF intervals.

So for example, if there is any mislogging

and I wanted to select an interval inside the CIF that wasn’t logged as CIF

to assign it to CIF,

I can’t do that with the split lithologies,

but I could with the interval tool.

The next kind of created column we’ll take a quick look at

within Leapfrog is the drillhole correlation.

With the drillhole correlation tool,

you can view and compare selected drillholes in a 2D view,

you can then create new interpretation tables

in which you can assign and adjust intervals

and create new intervals as needed.

Interpretation tables are like any other interval table

in a project, and can be used to create models.

You can also save and export correlation set layouts

and styles that can then be used

in other Leapfrog Geo projects.

Some useful applications of the drillhole correlation tool

include for geological modeling

of consistent stratigraphic sequences

or for comparing economic composites between holes.

I’ll set up a drillhole correlation set now.

And so I’ll bring the collars back in the scene view

that I’m interested in looking at,

and we can drag across any of the columns

that we have here in the project tree.

So for example, our Lith_Grouped

versus our original lithology, we can compare here,

and we can add in new interpretation tables.

So I’ll set my source column,

I can populate an empty table,

kind of like setting the base lithology to none

for an interval selection,

or I can populate all intervals to start off with

and they can then be adjusted.

So here I’ve started off with the same intervals

as the Lith_Grouped,

but now I have the option to adjust here in 2D

and make changes

based on other information that I might have.

I’ll save that.

And we’ve now got this new interpretation table

in the project tree that can be used to model.

The next tool we’ll take a quick look at

is the overlaid lithology tool.

So this lets you combine two columns

to create one new column.

There’s a primary column.

So this is the column that you wish to have precedence

and then a fallback column.

So data in that fallback column will only be used

when no data is available from the primary column.

An example here might be where we’ve logged oxidized

and transitional material in one column,

but we haven’t logged fresh.

So we may want to combine

that oxidized and transitional material

with our logged fresh codes.
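The primary/fallback logic is simple to express. A sketch with hypothetical per-interval values, where an empty string stands in for "no data":

```python
def overlay(primary, fallback):
    """Combine two per-interval columns: take the primary value where
    present, otherwise fall back to the second column."""
    return [p if p else f for p, f in zip(primary, fallback)]

# Hypothetical interval values for one hole.
weathering = ["oxidised", "transitional", "", ""]
lith = ["seds", "seds", "mafics", "seds"]
combined = overlay(weathering, lith)
```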

So I’ll set that up quickly.

I’ll bring in our additional weathering column,

and we’ll set up the overlaid lithology.

So here we can set our weathering as the primary column

and our grouped lithology as the fallback column.

And then it will only populate our grouped lith

where there are no weathering codes.

And we can assume that

all of the oxidized and transitional material

will be populated from the weathered column,

and then everywhere that we have our fallback column

is fresh.

And we’ll take a look at that

on our drillhole correlation sets as well.

So you can see our combined column here.

So everywhere where we’ve got data in our weathering column,

we’ve populated as the primary column

in the overlaid lithology,

and then where we don’t have data,

we’ve got our grouped lithology.

The next type of column we’ll have a look at

is the category column from numeric data.

And this is used when you’ve got your numeric data

that you wish to turn into categories or bins

so that you can use that with the lithology

and category modeling tools within Leapfrog.

This data can then be interval selected and used as the base

for a domain model or a geological model.

An example of this is for narrow vein modeling

in gold deposits.

The grade and logging are often combined together

to determine domain boundaries, so we may need to make sure

that we’re selecting based on the assay data.

I’ll quickly set up a category from numeric.

So we’ve got our raw assay data here

with a continuous color map.

To set up the category from numeric,

we’ll right click on the assay table

and create a new column category from numeric.

The base column will be gold

and we’ll call this one Au_cat.

We can now add in different categories.

So my lowest category might be a 0.05

then we’ll have a 0.5, a one,

a three, and we’ll put a 10 in there as well.

We need to adjust these names

and make sure that we’ve got the symbol

the way that we want it to be.

So decide if the assay lands exactly on the number,

whether you want to end the bin above or below.
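That boundary choice is the only subtle part of the binning. Here is a sketch of the same logic using the bin edges from this demo; the flag decides which side of a boundary an exact hit falls on:

```python
import bisect

def categorise(value, edges, labels, right_closed=False):
    """Bin a numeric value into grade categories. `labels` has one more
    entry than `edges`. The right_closed flag decides which bin a value
    landing exactly on a boundary goes into, which is the same choice
    the dialog asks you to make."""
    if right_closed:
        idx = bisect.bisect_left(edges, value)   # boundary goes to lower bin
    else:
        idx = bisect.bisect_right(edges, value)  # boundary goes to upper bin
    return labels[idx]

edges = [0.05, 0.5, 1.0, 3.0, 10.0]
labels = ["<0.05", "0.05-0.5", "0.5-1", "1-3", "3-10", ">=10"]
```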

I will set the colors up here and click okay.

Now in that assay table, we can drop down to the Au_cat

rather than that raw gold data

with the continuous color map.

And we’ve binned all of our assays into these grade bins.

This can be used as the base column

for an interval selection if we want to,

and we can then select based on our Au_cat intervals.

Next up, we’ll take a look at creating new columns

in the composites folder.

So we can create category composites, majority composites,

numeric composites, or economic composites here,

and we’ll quickly run through the functionality

for each of these types.

The category comps can be used

if unit lith boundaries are poorly defined.

For example, if there are small fragments

of other lithologies within the lith of interest,

the category comp tends to smooth these out.

So we can composite the drillhole data directly here

using the category comp functionality.

This creates a new interval table

that can be used to build models,

and changes made to the table

will be reflected in all models based on that table.

So we’ll set up a new category comp from the Lith_Grouped.

Our primary unit of interest in this model is the seds.

And in this case,

we’ll ignore the colluvium, felsics, and regolith,

and we’ll just look at comping small segments of mafic

that are caught within the seds.

So this is where we set that length.

So we can filter these exterior segments,

let’s say shorter than two meters,

and that will simplify our seds for modeling

if we’re happy to include two meters of mafic

inside our seds.

So again, it really depends

on what you’re building the model for and why.

So the settings here will be dependent on that.

So dragging that into the scene view,

we can see

that we’ve got the ignored segments in gray,

primary segments in red, and the exterior segments in blue.

And we can have a look using the drillhole correlation

and compare our original grouped lithology

to the category comp.

So I’ll just set up and grab a couple of colors there.

We’ll bring on our comp and we’ll bring on our Lith_Grouped.

So we can see here in hole SKA350,

in our Lith_Grouped we had a small interval of mafics

and that has now been composited into our primary lithology.

The second option for compositing these points,

rather than doing it directly on the drillholes,

is to change the settings in the intrusion surfacing

within the geological model.

We have very similar settings

that can be changed at that stage,

and I prefer to use the second option

in the surfacing settings

as it keeps the underlying data as is,

but adjusts the input points building that intrusion

based on the compositing parameters that you set up.

This is covered in our advanced surface editing training

and also in a previous webinar.

Using the majority composites tool,

we can comp category data into interval lengths

from another table or into fixed interval lengths.

So we’ll take a look at that.

So a new majority composite, we can set a fixed length,

or we might want to set our assay length.

We may want to comp our grouped lithology

based on our assay intervals.

This is really useful for comparing

those lith and assay intervals, as the intervals,

the froms and the tos from the assay table,

will be applied to the category column

to produce a new column with those shorter intervals.

The majority comp can then be used

to create a merged table

containing both assay and lithology data.

And the key benefit of this workflow

is that the merge table will not split the assays,

so those assay intervals will still be maintained

even if the lith intervals don’t align.
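Conceptually, majority compositing assigns each assay interval the lith code with the greatest overlapping length. A simplified single-hole sketch of that idea, not Leapfrog's implementation:

```python
def majority_composite(assay_ivals, lith_ivals):
    """Assign each (from, to) assay interval the lith code with the
    greatest overlapping length: a simplified, single-hole sketch of
    majority compositing onto another table's intervals."""
    out = []
    for a_from, a_to in assay_ivals:
        overlap = {}
        for l_from, l_to, code in lith_ivals:
            length = min(a_to, l_to) - max(a_from, l_from)
            if length > 0:
                overlap[code] = overlap.get(code, 0.0) + length
        out.append(max(overlap, key=overlap.get) if overlap else None)
    return out

# Hypothetical lith intervals and shorter assay intervals for one hole.
liths = [(0.0, 2.0, "seds"), (2.0, 3.5, "mafics"), (3.5, 6.0, "seds")]
assays = [(0.0, 1.0), (2.0, 4.0), (4.0, 5.0)]
codes = majority_composite(assays, liths)
```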

So I’ll set up one of those.

And if we open up that majority comp,

we can see that for each interval,

we have a Lith_Grouped code.

The numeric comp tool will take

unevenly spaced drillhole data

and turn it into regularly spaced data,

which can then be used in an interpolation or an estimation.

The compositing parameters can be applied

to the entire drillhole, or just within a subset of codes.

For example, we might want to just composite gold values

within the CIF.
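The core of numeric compositing is a length-weighted average over regular runs. A simplified single-hole sketch that ignores residual end lengths and minimum coverage, which the real tool handles:

```python
def composite_fixed(intervals, length):
    """Composite (from, to, grade) tuples into regular runs of the given
    length using a length-weighted average. Single hole, no gaps, and no
    residual end-length handling in this sketch."""
    start = intervals[0][0]
    end = intervals[-1][1]
    out = []
    f = start
    while f < end:
        t = min(f + length, end)
        wsum = gsum = 0.0
        for ifrom, ito, grade in intervals:
            ov = min(t, ito) - max(f, ifrom)  # overlap with this run
            if ov > 0:
                wsum += ov
                gsum += ov * grade
        out.append((f, t, gsum / wsum))
        f = t
    return out

# Hypothetical assay intervals: 1 m at 1 g/t then 2 m at 4 g/t.
comps = composite_fixed([(0.0, 1.0, 1.0), (1.0, 3.0, 4.0)], 2.0)
```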

So here we can set our base column to our Lith_Grouped

and just comp the seds to, let’s say, a two meter interval.

There are some other options here

for residual end length and minimum coverage.

If you’d like some more information about these,

please get in touch with us,

as the minimum coverage can have quite a large impact,

particularly if you have missing intervals

and missing samples.

We’ll choose our output column there.

So we’ll choose the Au and click okay.

And we’ve got our numeric comp now in the scene view.

You can see we have only comped that data

that fell within the CIF unit.

The final type of composite we can create

in Leapfrog is an economic composite.

The economic comp classifies assay data

into either ore or waste,

and can take into account grade thresholds,

mining dimensions, and allowable internal dilution.

The result is an interval table

with a column of ore/waste category data,

a column of composited interval values,

plus some additional columns showing the lengths,

linear grade, included dilution length,

included dilution grade, and the percentage of the comp

that’s based on missing interval data.
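As a heavily simplified sketch of the classification step only, here is a cutoff-grade tag per interval; the real tool also applies mining dimensions, minimum composite lengths, and internal dilution rules:

```python
def classify_ore(intervals, cutoff):
    """Tag each (from, to, grade) interval as 'ore' or 'waste' by a
    simple cutoff. This ignores minimum mining widths and internal
    dilution, which the economic compositing tool handles."""
    return [(f, t, g, "ore" if g >= cutoff else "waste")
            for f, t, g in intervals]

# Hypothetical assay intervals for one hole, cutoff of 2 g/t.
tagged = classify_ore([(0.0, 1.0, 0.4), (1.0, 2.0, 3.2), (2.0, 3.0, 2.5)],
                      cutoff=2.0)
```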

There’s a free training module on My Seequent,

which runs through this in more detail

and also a webinar on our website.

I’ll create an economic comp here

using gold values,

and we’ll set a cutoff grade of two

and a minimum ore comp length of three.

We’ll just set a really simple one now for demo purposes.

You can see in the scene view now we have our drillholes.

All of the intervals have either been classified

as ore in red or waste in blue.

It can be quite handy to use the drillhole correlation

to compare how we’ve comped across different drillholes

on the same line.

So I’ll set up a new drillhole correlation set quickly.

So I’ll compare a few holes here.

We can bring across the status and the grade.

We can see across those holes,

we’ve actually only got ore based on those parameters

in every second hole here.

Another way to create new columns in Leapfrog

is by using the calculations on our drillhole data.

So these calculations might be metal equivalencies

or elemental ratios that can then be used

in downstream modeling or interpretation.

The calculations can be found by right clicking

on any of the drillhole tables and going to calculations.

We can then set up a variable, a numeric calculation,

or a category calculation using any of these existing items

and the syntax and functions

found here on the right hand side.
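A metal equivalency is a typical example of such a calculation. A sketch with illustrative prices and recoveries; these numbers are placeholders only, as real equivalency factors are deposit- and study-specific:

```python
def gold_equivalent(au_ppm, ag_ppm, au_price=2000.0, ag_price=25.0,
                    au_recovery=0.92, ag_recovery=0.80):
    """Hypothetical AuEq calculation: silver grade is scaled by the
    ratio of recovered silver value to recovered gold value and added
    to the gold grade. Prices are per ounce and purely illustrative."""
    factor = (ag_price * ag_recovery) / (au_price * au_recovery)
    return au_ppm + ag_ppm * factor

aueq = gold_equivalent(1.0, 92.0)
```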

And finally, the last way to create new columns

within Leapfrog is by back flagging drillhole data

with an existing model.

So this will flag our domain model

back against our drillholes,

and create new tables and columns within Leapfrog.

Once we’ve set up all of these columns and tables

that we need for our interpretation,

we might want to merge that data together.

That’s where the merged table functionality comes in.

The merge table functionality simply brings together columns

from multiple tables into a single table,

and there are many applications for this within Leapfrog,

including for these two reasons here.

So one would be to allow

for more flexible interval selection.

And the other reason here that we’ve got

is to combine numeric and categorical data

into a single table

for the purpose of domain-specific statistical analysis

of that data.

The merge table preserves all intervals

from all tables being merged,

which depending on the logging practices

that you’ve got on your site

may result in assay intervals being split.

To preserve the assay intervals and create a merge table

containing assay and lithology data

without splitting those intervals,

start by creating a majority comp table first,

and then using that majority comp

to merge in with your assays.
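Once the lith column has been composited to the assay intervals, the merge is just a join on hole, from, and to. A sketch with hypothetical data:

```python
def merge_on_intervals(assays, lith_comp):
    """Merge assay rows with a lith column composited to the same
    from/to intervals, keyed on (hole, from, to) so the assay intervals
    are never split."""
    lith_by_key = {(h, f, t): code for h, f, t, code in lith_comp}
    return [(h, f, t, g, lith_by_key.get((h, f, t)))
            for h, f, t, g in assays]

# Hypothetical assay rows and a majority comp on the same intervals.
assays = [("DH001", 0.0, 1.0, 0.3), ("DH001", 1.0, 2.0, 2.1)]
comp = [("DH001", 0.0, 1.0, "seds"), ("DH001", 1.0, 2.0, "mafics")]
merged = merge_on_intervals(assays, comp)
```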

An important note here is that

if you build an interval selection column

from a merge table, preserving the from and to intervals

in that merge table is very important.

If the from or to values in the merge table get adjusted,

the interval selection coding for those rows

will be removed and you’ll lose those selection codes

in the interval selection.

Merge tables are dynamically linked

back to their source tables.

So a change in the interval length in the source table,

for example, if the hole gets relogged

and then reloaded with different froms and tos,

this would lead to a change of interval length

in the merge table,

and result in the loss of some of your interval selections.

So be careful with this.

We have some really great charts

and workflow documents here on merge tables.

If you would like a copy of these,

please get in touch with us after the webinar

and we can send them through.

So this one here, it’s just a brief workflow chart

on merge tables asking why

and when you should merge tables,

depending on what you’re planning to use them for.

And there’s some information in here

about whether you should build the merge table

before performing your interval selection and modeling

or afterwards

depending on what workflow you’re trying to do.

And we’ve got a document for workflow 1 as well

that runs through creating that majority comp first

and then merging that with your assays

so that you keep those assay interval lengths.

So, yeah, please get in touch

if you’d like a copy of these and we’ll send them out.

And finally, once we’ve validated and interpreted our data,

merged tables if needed, and set up interval selections,

we can choose a base lithology column

for our geological model or domain model.

An important thing to know is that

once the base lith column has been set,

it cannot be changed, so choose carefully.

Any of the column types that we’ve discussed so far

or any of those category or lithology column types

can be used as the base column.

So this might be an interval selection

on a grouped lithology or an interval selection

on a category from numeric column,

or perhaps even the status column from an economic comp.

They can all be used as the base column

for your geology model.

The other option we have is to have a base column of none.

I’ll just bring up the new geological model window.

So here we can have a base column of none if we want to.

So when you select this,

the point is to not link the model to any source column.

So this means you’re free to remove source columns

if needed, and you can model

from multiple different data sources.

This can be a good option if you’re creating a model

from information other than drillhole data.

So for example, GIS lines, polylines,

existing surfaces or structural data,

then you could select none.

A query filter can also be applied at this stage

when setting up the geology model.

So for example, we might wish to filter out those holes

that we classified earlier as low confidence.

So, I’ll just quickly write the query filter for that.

So I’ll build my query, and say that I want my confidence

to only be high or medium.
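The effect of that query filter can be sketched as a simple membership test on the confidence category we set up earlier:

```python
def query_filter(collars, allowed=("high", "medium")):
    """Keep only collars whose confidence category is in `allowed`,
    the same effect as a 'confidence is high OR medium' query filter."""
    return [c for c in collars if c["confidence"] in allowed]

# Hypothetical collar records with the confidence category attached.
collars = [
    {"hole": "DH001", "confidence": "high"},
    {"hole": "DH002", "confidence": "low"},
    {"hole": "DH003", "confidence": "medium"},
]
kept = query_filter(collars)
```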

I’ll set my base lithology column to my Lith_Selection

so that I’ve got maximum flexibility,

and I’ll apply my query filter.

Now we’re ready to go and start modeling.

Thank you for attending this webinar today.

We hope that this has been a helpful guide

to understanding drillhole data in Leapfrog,

and assists with importing, validating,

and interpreting data to get ready for modeling.

As always, if you need any help,

please get in touch with your Seequent support team

at [email protected]

And we have great resources on our website,

in the My Seequent Learning,

and we run training and project assistance.

So again, please contact your local team

if you need any help with training and project assistance.

Thank you.