Lyceum 2021 | Together Towards Tomorrow

This session will showcase an implementation of MX Deposit, Imago, Leapfrog Geo and Central on a 50,000m historic drilling data set, implementing data capture and visualisation for ongoing exploration.

Overview

Speakers

James Rogers
President & CEO, Longford Exploration Services

Anne Belanger
Project Geologist – Seequent

Duration

30 min

Video transcripts

[00:00:00.726]
(light music)

[00:00:10.620]
Hi and welcome everyone to Lyceum 2021

[00:00:15.900]
for our North American region.

[00:00:18.310]
My name is Anne Belanger.

[00:00:19.260]
I’m a Project Geologist with Seequent in Vancouver,

[00:00:22.220]
and I’m pleased today to introduce James Rogers,

[00:00:25.130]
who’s CEO and President of Longford Exploration Services.

[00:00:29.350]
James is an exploration geologist

[00:00:31.370]
who’s been working in the exploration sector since 2007

[00:00:36.400]
and he’s worked in areas all over the world.

[00:00:39.090]
Today he’s going to be talking about,

[00:00:40.767]
“The implementation of streamlined data collection

[00:00:43.720]
and viewing in early exploration projects.”

[00:00:47.270]
James, would you like to start?

[00:00:49.480]
Perfect. Anne, thanks so much for the opportunity

[00:00:52.600]
to speak at Lyceum.

[00:00:54.620]
It’s been an excellent adventure

[00:00:57.220]
kind of getting started using the Leapfrog products

[00:01:01.380]
along with all the supplementary software solutions

[00:01:04.670]
over the last little while

[00:01:05.830]
so today I’m hoping to talk a little bit about

[00:01:08.260]
what that experience was like for us to onboard

[00:01:11.240]
a couple of our projects

[00:01:12.500]
as an early stage exploration company

[00:01:14.820]
and dive into a little bit about

[00:01:17.030]
how the resulting system has worked for us,

[00:01:20.390]
its benefits and identify some of those challenges

[00:01:22.800]
that we as early stage explorers often face

[00:01:25.210]
and how we’re able to implement these solutions for us.

[00:01:30.270]
A little bit about me, thank you for the introduction.

[00:01:32.960]
So I do run an exploration services company

[00:01:35.590]
called Longford Exploration.

[00:01:36.870]
It’s grassroots-focused.

[00:01:39.040]
We work around the world managing drill programs.

[00:01:41.610]
Our specialty is kind of being those first boots

[00:01:43.690]
on the ground, taking a project through its early stages

[00:01:47.560]
of exploration and drilling

[00:01:49.600]
and all the way up to that economic phase

[00:01:53.790]
where we pass things off to groups with more engineering

[00:01:57.790]
and financial experience than us.

[00:02:00.610]
My background as an explorationist

[00:02:03.800]
has been all across Africa, South America.

[00:02:07.030]
I’ve managed and run projects all around the world

[00:02:10.040]
and I think we’re up to 30, 40 countries

[00:02:12.160]
here now that we’ve worked in

[00:02:13.240]
so in a short span of time

[00:02:18.010]
we’ve accomplished a lot of things

[00:02:19.510]
and we’ve certainly managed to take a look at

[00:02:21.830]
a lot of projects and a lot of different data styles.

[00:02:25.040]
And so one of the challenges that we’ve always had

[00:02:28.060]
as we onboard either a new client

[00:02:29.810]
or one of our own projects

[00:02:30.880]
as we continue to pull things together

[00:02:33.780]
is really coming up with data collection solutions

[00:02:36.300]
that are quick to implement,

[00:02:39.210]
that are able to accurately bring together data

[00:02:43.690]
across all historic work,

[00:02:46.530]
all of our ongoing work

[00:02:48.150]
and how do we bring everything together

[00:02:49.890]
across an entire company solution

[00:02:51.680]
so that our geologists are quickly trained,

[00:02:54.470]
all of our data collection is done and streamlined

[00:02:58.250]
and we can have oversight and accountability throughout it.

[00:03:01.410]
So really what we are looking for

[00:03:02.930]
and have looked for in a solution

[00:03:04.760]
always comes down to money first,

[00:03:06.680]
making sure that it’s a cost-effective solution

[00:03:08.950]
for what we’re looking for.

[00:03:10.460]
We want accountability so we know who’s entered that data,

[00:03:13.530]
who’s the geologist that’s logged the core,

[00:03:15.330]
who’s the geo-tech that’s shipped those samples off

[00:03:18.010]
and how are we able to work through all of our errors?

[00:03:22.060]
Oftentimes we’re working with data

[00:03:24.820]
that has multi-generational legacy to it.

[00:03:28.920]
Sometimes it’s paper drill logs back from the early 1960s

[00:03:33.050]
or even sometimes further back

[00:03:34.980]
and we need to be able to pull a lot of that historic data

[00:03:37.300]
into some form of usable database

[00:03:40.549]
that we can both keep continuity on it,

[00:03:43.500]
but we can also keep visualization

[00:03:46.320]
and be able to move the model

[00:03:47.770]
and our project understanding forward.

[00:03:49.760]
We need things that are quickly and easily customizable

[00:03:52.460]
in the field,

[00:03:53.430]
it’s very important for exploration

[00:03:56.360]
and early stage exploration.

[00:03:57.980]
We don’t have robust infrastructure at site often,

[00:04:01.080]
we need to be able to quickly change and update our pick lists

[00:04:06.410]
or our pick tables for how we’re logging

[00:04:08.670]
and collecting data and we need to make it efficient.

[00:04:12.320]
We’re working long shifts collecting this data

[00:04:14.557]
and the last thing we want to be doing is being frustrated

[00:04:17.640]
at how we’re collecting it.

[00:04:19.450]
So what was interesting for us,

[00:04:21.650]
as we went through a number of different solutions

[00:04:24.420]
and tested a number of different collection

[00:04:26.600]
and management software packages,

[00:04:29.887]
is that really implementing and bringing things together

[00:04:33.290]
really fast and coming up with a robust solution

[00:04:35.560]
where all the pieces were talking to each other,

[00:04:37.720]
took about six years of trial and error.

[00:04:40.510]
Earlier this year we onboarded a historic data set

[00:04:43.080]
of about 50,000 meters of drilling

[00:04:46.820]
across 172 drill holes going back to the 70s in this case,

[00:04:53.210]
early 90s for the bulk of it.

[00:04:55.570]
The data was collected by various geologists

[00:04:58.140]
and really sat around

[00:04:59.130]
and hadn’t been pulled together and compiled.

[00:05:02.420]
So this was the ultimate test scenario for us

[00:05:04.500]
to kind of pull things together.

[00:05:06.100]
So what we care about, and how we’re going to approach it,

[00:05:09.440]
is what I’m going to talk about today:

[00:05:10.870]
how do we look at the various elements of data

[00:05:13.790]
that we need to collect

[00:05:14.880]
and how are they all going to play together?

[00:05:16.930]
So the first point is historic data,

[00:05:18.827]
and that’s also old logs, old assay certificates,

[00:05:22.460]
all of that information.

[00:05:24.210]
The next part is the field data for our ongoing projects,

[00:05:27.200]
that’s going to be the collar data

[00:05:28.610]
of drilling, point data for soils,

[00:05:31.640]
all of the information about the field sites themselves,

[00:05:35.800]
what kind of condition the drill sites are in.

[00:05:37.620]
And then of course we’re going to bring our data collection

[00:05:40.790]
into the core shack.

[00:05:41.670]
So that’s going to be how are we logging it,

[00:05:43.520]
how we’re imaging it, how we can make this more efficient,

[00:05:46.020]
and when we’re reprocessing a lot of drill core

[00:05:48.980]
it certainly takes a lot of time

[00:05:50.380]
so the more efficiencies we can get in this

[00:05:52.570]
the better we’re going to be able to move the data along.

[00:05:56.120]
After we get that data collected

[00:05:58.130]
we of course want to have a continuous stream

[00:06:00.290]
of that database itself evolving with both historic

[00:06:04.130]
and new data that’s coming in,

[00:06:05.980]
and we want to be able to push it out

[00:06:07.500]
into the usable products,

[00:06:09.000]
both for making decisions on the fly

[00:06:11.380]
like visualizing where our drill hole progress is

[00:06:13.820]
if we’re hitting our targets,

[00:06:15.130]
and then moving it along into modeling

[00:06:17.840]
and actually producing some final products on it.

[00:06:21.620]
So starting first with the historic data itself.

[00:06:25.750]
This is often a challenge for many projects

[00:06:28.740]
that we first join and come into.

[00:06:32.410]
You know we’re dealing with complicated Access databases,

[00:06:35.114]
multiple geologists from different exploration campaigns

[00:06:40.370]
having different logging methods,

[00:06:42.810]
maybe different logging codes.

[00:06:44.380]
And there’s no way around it,

[00:06:45.470]
we always end up having to do a lot of work in synthesizing

[00:06:49.350]
and cleaning up this data.

[00:06:51.480]
What really helped us in our final solution

[00:06:54.690]
that we have here

[00:06:55.523]
and what’s been our solution

[00:06:57.000]
for our current data collection is MX Deposit.

[00:07:01.610]
So why that works well for us

[00:07:04.260]
is we’re going to do a lot of legwork in Excel,

[00:07:07.470]
we’re going to pull a lot of the logs together,

[00:07:09.960]
a lot of that information down

[00:07:11.990]
but then we’re going to be importing

[00:07:13.040]
all of that historic data into MX.

[00:07:15.360]
It’s going to identify the holes for us.

[00:07:17.280]
It’s going to show us where our errors and overlaps are

[00:07:19.980]
and where we have challenges

[00:07:21.250]
but it’s going to give us the opportunity

[00:07:22.950]
which is probably one of its most important features

[00:07:27.380]
for dealing with historic data,

[00:07:28.590]
to pull information

[00:07:30.650]
directly from original assay certificates

[00:07:34.600]
and it’s going to give us a chain of data evolution

[00:07:38.640]
so that we know that we’re working with the most up-to-date

[00:07:41.790]
and the most correct data that we can.

[00:07:45.010]
That means that our QAQC,

[00:07:46.660]
if we’re onboarding a historic project,

[00:07:48.560]
we’re actually going to be able to pull out and plot out

[00:07:51.300]
how our QAQC standards and blanks performed all the way back,

[00:07:56.470]
and not just be working with the resulting reports

[00:07:58.820]
from the previous exploration campaigns.

[00:08:03.240]
We’ll dive a little bit more into this as we go.

[00:08:06.090]
This is an example of a historic data set

[00:08:09.020]
that is pretty typical.

[00:08:10.710]
This is in good shape,

[00:08:11.690]
we’ve got a lot of different categories

[00:08:13.410]
that have been defined,

[00:08:16.410]
a lot of different attributes

[00:08:17.610]
that have been pulled together

[00:08:18.970]
and this is typically what we’re going to be wanting to build

[00:08:21.780]
as we get ready to bring that data into MX Deposit

[00:08:26.820]
for the import itself.

[00:08:28.850]
Now when we bring in data into MX Deposit,

[00:08:33.120]
it is incredibly fast for the amount of data

[00:08:38.290]
that we can bring in.

[00:08:39.820]
It’s often going to give us some errors as we go,

[00:08:42.570]
as we bring in data,

[00:08:44.510]
especially when it gets back to older things,

[00:08:46.330]
it’s going to have a lot of missing intervals

[00:08:48.140]
or perhaps missed keystrokes for sample IDs

[00:08:52.520]
and this gives us a great opportunity to be able to flag

[00:08:55.190]
and identify where these challenges are

[00:08:57.300]
so we can get back in and reference

[00:08:59.250]
and we can actually remove or flag or add comment fields

[00:09:02.730]
on to the data attributes themselves

[00:09:06.030]
so that we know and we can rank

[00:09:07.460]
sort of what that quality of data is,

[00:09:11.110]
if we’re just going to be quantitative at best

[00:09:12.930]
or if we’re actually going to be able

[00:09:14.040]
to keep a qualitative data chain on any historic information

[00:09:18.520]
as we bring it in.
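
A minimal sketch of the kind of interval and sample-ID checks described here, using pandas on a flattened historic log; the column names and example values are assumptions, not MX Deposit’s schema or import logic:

```python
# Flag gaps, overlaps and duplicate sample IDs in a compiled legacy interval log
# before import. Column names (hole_id, from_m, to_m, sample_id) are assumed.
import pandas as pd

def flag_interval_issues(intervals: pd.DataFrame) -> pd.DataFrame:
    df = intervals.sort_values(["hole_id", "from_m"]).copy()
    prev_to = df.groupby("hole_id")["to_m"].shift()
    df["gap"] = df["from_m"] > prev_to        # missing interval before this row
    df["overlap"] = df["from_m"] < prev_to    # interval overlaps the previous one
    df["dup_sample_id"] = df["sample_id"].duplicated(keep=False)  # possible keystroke slip
    return df

# Hypothetical rows compiled from old paper logs via Excel:
legacy = pd.DataFrame({
    "hole_id":   ["DH-01"] * 4,
    "from_m":    [0.0, 3.0, 4.5, 10.0],
    "to_m":      [3.0, 5.0, 8.0, 12.0],
    "sample_id": ["A100", "A101", "A101", "A103"],
})
print(flag_interval_issues(legacy))
```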

[00:09:22.060]
I’ll jump back in a little bit more on historic data

[00:09:25.450]
as we kind of summarize at the end

[00:09:27.550]
of some of the other advantages of MX

[00:09:29.500]
and how the solution is working for us.

[00:09:32.560]
But another important part that I want to talk about is

[00:09:35.130]
the second part of our data collection,

[00:09:37.502]
and that’s going to be at the drill.

[00:09:39.410]
What we need to do

[00:09:40.420]
and what we need to consider in a situation

[00:09:42.870]
where we need to collect data at an actual drill site itself

[00:09:45.830]
is that we need to be able to function offline

[00:09:47.880]
without going over top of other files

[00:09:51.620]
or other editing that may be going on.

[00:09:53.440]
So MX gives us an opportunity to really pull out

[00:09:56.320]
and check out a drill hole itself.

[00:09:57.830]
We can go and make observations in the field

[00:09:59.700]
on a handheld tablet, entering GPS coordinates,

[00:10:02.260]
we can actually populate some coordinates right into it

[00:10:04.810]
which is excellent,

[00:10:06.010]
pop a picture in, whatever we might need to do

[00:10:08.630]
that gives us the opportunity to really bring that data out

[00:10:12.740]
to the drill rig itself

[00:10:14.030]
or that data collector out to the drill rig itself.

[00:10:17.240]
We’ve actually successfully implemented

[00:10:19.250]
quick logging at a hole as well.

[00:10:21.107]
Checking a hole out is important for deep holes

[00:10:24.500]
when we’re shutting things down,

[00:10:25.450]
it gives us an opportunity

[00:10:26.960]
to make sure that we’ve got the surveys,

[00:10:28.880]
downhole surveys, all populated,

[00:10:31.420]
and we’re keeping track of all that data coming in.

[00:10:33.870]
So the important parts for collecting data on this

[00:10:35.617]
are that it has to be field portable, it has to work offline,

[00:10:38.980]
it has to sync when you bring it back into the core shack

[00:10:44.390]
or back into the field office

[00:10:46.140]
and allow us to continue to move the project along.

[00:10:50.950]
One of the great things with the way

[00:10:53.300]
that we can work with MX is everything is very customizable.

[00:10:56.890]
Everything’s very easy to make.

[00:10:59.040]
We can start with the base template,

[00:11:01.090]
but we can now start collecting attributes

[00:11:03.910]
that we may only identify as being important

[00:11:07.580]
once we’re in the field,

[00:11:08.590]
maybe that’s whether casing is left in or not,

[00:11:11.160]
or whether casing is kept.

[00:11:13.800]
We are able to really pull together

[00:11:15.750]
multi-generations of coordinates.

[00:11:18.510]
We can have NAD 27 and then bring in NAD 83

[00:11:23.027]
and we can rank what those coordinates are going to be

[00:11:25.263]
that we want to use in our final project

[00:11:27.470]
or even the local grid.

[00:11:28.990]
So we’re starting to be able to collect

[00:11:30.480]
a pretty robust amount of data.
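
As a rough sketch of how a legacy NAD 27 collar can be reprojected to NAD 83 for that kind of comparison and ranking, here is an illustration with pyproj; this is not how MX Deposit handles it internally, and the UTM zone and coordinates are hypothetical:

```python
# Reproject a hypothetical NAD27 UTM collar to NAD83 UTM so that coordinate
# generations can be compared side by side. Zone 9N is an assumed example.
from pyproj import Transformer

nad27_to_nad83 = Transformer.from_crs("EPSG:26709", "EPSG:26909", always_xy=True)  # NAD27 -> NAD83, UTM 9N

easting_27, northing_27 = 434_500.0, 5_528_750.0      # hypothetical historic collar
easting_83, northing_83 = nad27_to_nad83.transform(easting_27, northing_27)
print(f"NAD83 collar: {easting_83:.1f} E, {northing_83:.1f} N")
```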

[00:11:31.900]
A lot of the other solutions we looked at

[00:11:33.600]
required 24 to 48 hours’ worth of waiting time

[00:11:37.500]
while we bring that project

[00:11:39.370]
or that problem to the actual coders

[00:11:42.570]
behind the software itself

[00:11:44.300]
and then they’re going to be able to build these fields for us.

[00:11:47.210]
In the case of MX almost everything we can build ourselves,

[00:11:50.710]
all the attributes and all the pick lists

[00:11:52.970]
and that’s pretty important for a junior exploration company

[00:11:56.490]
that’s nimble and has multiple projects,

[00:11:59.110]
we really want to focus on data collection

[00:12:01.690]
not on coding software on how to collect the data.

[00:12:05.910]
A little bit more

[00:12:06.870]
as we continue to look at the features of MX

[00:12:09.900]
and I think this is an area

[00:12:10.900]
that we’re going to continue to see advance

[00:12:13.290]
within this solution itself

[00:12:15.620]
but we’re able to start seeing

[00:12:17.180]
and tracking how many sample dispatches,

[00:12:19.540]
what our turnaround times are,

[00:12:21.270]
how many samples we have in the full database,

[00:12:23.690]
which ones are being accepted, which ones are not,

[00:12:27.180]
how many drilled holes

[00:12:28.570]
and really get a good picture and summary

[00:12:30.670]
of what’s in progress in the logging process

[00:12:32.940]
and what’s been completed, revision tracking,

[00:12:36.060]
and being able to see who’s done what inside of a project

[00:12:40.000]
is pretty important for us as well

[00:12:41.570]
and that introduces that accountability

[00:12:43.920]
I spoke about earlier which is so important for us.

[00:12:47.080]
I think as part of this solution,

[00:12:49.730]
really the most important aspect

[00:12:53.300]
is being able to efficiently

[00:12:55.460]
and quickly collect accurate data in the core shack as well.

[00:13:01.530]
We’ve run into situations in the past with other software

[00:13:06.330]
and other solutions where we maybe come across

[00:13:08.330]
a new mineral assemblage

[00:13:09.690]
or a new lithology that we weren’t expecting

[00:13:12.420]
and you have to remember we’re not always drilling

[00:13:14.170]
something that we have a great understanding on.

[00:13:18.000]
And that, again, has presented a challenge for us

[00:13:20.280]
to be able to populate a pick list

[00:13:22.970]
with that new lithology,

[00:13:24.410]
so the geologist that’s logging the core

[00:13:26.380]
is oftentimes just entering that as a comment field

[00:13:29.220]
and we rush through the job, we finish things up

[00:13:31.250]
and now we’re dealing with this

[00:13:34.730]
after we’ve already collected the data

[00:13:36.367]
and we no longer have the rocks to look at

[00:13:38.160]
right in front of us.

[00:13:38.993]
And in MX, we just pop it right into a new pick list,

[00:13:42.460]
publish it, it’s up and populated in minutes,

[00:13:46.630]
and it gives us the opportunity to restrict that editing.

[00:13:49.030]
So if we’re worried maybe we have a junior geologist

[00:13:52.330]
or a younger geologist

[00:13:53.230]
that hasn’t had a lot of experience on these rocks,

[00:13:56.400]
often the tendency is to collect too much data

[00:13:58.760]
or to break out too many lithologies

[00:14:02.210]
which may not be important.

[00:14:03.390]
So we’re able to kind of control that,

[00:14:05.857]
and that forces our geologists and our teams

[00:14:08.090]
to have a discussion about

[00:14:09.110]
whether we should be breaking something out

[00:14:10.590]
as a new lithology or otherwise.

[00:14:13.710]
Some of the most time-consuming stuff in the core shack

[00:14:16.870]
has always been recording recovery, RQD,

[00:14:19.420]
the box inventories, magnetic susceptibility,

[00:14:22.360]
specific gravities.

[00:14:23.490]
So using tablets and using these systems

[00:14:26.050]
we’re able to automatically calculate specific gravity

[00:14:30.030]
just from punching in measurements

[00:14:32.040]
directly from a tablet as we’re there.

[00:14:34.130]
That gives us two very important things.

[00:14:36.300]
One, it gives us a chance to make sure it makes sense

[00:14:38.300]
and it’s real data,

[00:14:39.880]
that we’re not making a data entry error,

[00:14:42.740]
that the SG is lining up within some realms of possibility

[00:14:46.530]
that makes sense.

[00:14:47.600]
And it gives us that immediate database population,

[00:14:51.300]
we’re not taking a clipboard back to enter data in.
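
For reference, the water-immersion specific gravity calculation alluded to here, plus the sort of plausibility check that catches data-entry errors, can be sketched as follows; the accepted range and measurements are hypothetical, and this is not MX Deposit’s built-in logic:

```python
# Standard water-immersion SG: dry weight divided by (dry weight - submerged weight),
# with a simple range check to catch keystroke errors at the tablet.
def specific_gravity(weight_in_air_g: float, weight_in_water_g: float) -> float:
    return weight_in_air_g / (weight_in_air_g - weight_in_water_g)

def plausible(sg: float, low: float = 2.0, high: float = 5.0) -> bool:
    # Most core falls roughly between 2.0 and 5.0; these bounds are assumptions.
    return low <= sg <= high

sg = specific_gravity(512.4, 318.9)   # hypothetical tablet measurements in grams
print(f"SG = {sg:.2f}, plausible = {plausible(sg)}")
```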

[00:14:55.910]
So MX Deposit is probably one of our biggest tools

[00:14:58.760]
for collecting all of these things,

[00:15:00.720]
all this data inside of the core shack itself as well

[00:15:03.340]
right up to the sample dispatch.

[00:15:05.960]
Its integration with multi-platforms

[00:15:12.060]
whether it’s being on an Android tablet or on a PC

[00:15:15.580]
is certainly useful for us as well.

[00:15:17.580]
We’re actually able to collect or view data

[00:15:19.940]
on a cell phone, to a wireless, weatherproof tablet,

[00:15:24.670]
right up to that PC or desktop in the core shack.

[00:15:27.613]
That gives us a lot of flexibility

[00:15:28.960]
when we’re collecting data.

[00:15:31.050]
Our geotechs are spending a lot less time on data entry

[00:15:34.390]
which means we’re able to process more core

[00:15:36.180]
and they’re also less fatigued with manual data

[00:15:39.750]
being punched in at the end of the day

[00:15:41.280]
from a spreadsheet or into a spreadsheet from clipboards

[00:15:45.340]
or other means of recording.

[00:15:50.200]
As I mentioned, we’re very much able to edit and manage

[00:15:55.520]
our pick list here

[00:15:56.470]
so if you come across a new alteration

[00:15:58.880]
we’re easily able to give it a new code and description,

[00:16:02.140]
the same with lithologies.

[00:16:03.650]
So these are customizable

[00:16:05.060]
and very easily edited in the field.

[00:16:08.650]
This is just an example of one of the projects

[00:16:11.190]
that we’ve been working on here.

[00:16:12.270]
We’re even able to color code.

[00:16:14.010]
There’s a little bit of extra work, working with the team.

[00:16:17.820]
We are able to color code these

[00:16:19.130]
so they’re accurately representing our colors in the model,

[00:16:22.220]
which is again,

[00:16:23.380]
just making that geologist’s and logger’s life

[00:16:25.880]
that much easier.

[00:16:29.130]
One of the next pieces of software that we really bring

[00:16:32.910]
to our solution that we’re currently employing here now

[00:16:35.870]
is Imago and this is an interesting solution

[00:16:38.560]
and we’d been looking at this for a couple of years

[00:16:41.370]
and we really started to deploy it earlier this year

[00:16:44.550]
on about a 10,000-meter-a-year drill program.

[00:16:47.100]
At first we were pretty reluctant

[00:16:48.540]
with how clunky a lot of the data collection had been

[00:16:51.680]
or the imaging collection and the viewers had been

[00:16:54.430]
but we’re pretty impressed with Imago.

[00:16:55.890]
It is web-based so you need an internet connection

[00:16:57.820]
to really upload and utilize it.

[00:17:00.150]
That being said, it is built in such a way

[00:17:03.620]
that it works extremely well

[00:17:05.010]
even on low bandwidth internet connections

[00:17:07.930]
which is excellent, and incremental uploads

[00:17:10.380]
really helped us out with that.

[00:17:11.770]
So what we’re looking for

[00:17:12.720]
is to be able to really get good imagery

[00:17:15.430]
and be able to use it.

[00:17:16.630]
It’s one thing to have a file structure

[00:17:19.130]
that’s got 30,000 images of core in it

[00:17:22.040]
but that’s not really going to give us a lot of data.

[00:17:24.770]
So we really were wanting to leverage something that could

[00:17:28.710]
allow us to view our results,

[00:17:30.100]
kind of review what that core was looking like

[00:17:32.620]
and revisit our logs

[00:17:33.453]
without having to get back out to the core stacks,

[00:17:37.690]
especially when we’re four months waiting for results

[00:17:40.630]
as a lot of us were in 2020 and early 2021 here so far.

[00:17:47.110]
And Imago does integrate seamlessly with MX

[00:17:49.460]
and it’s a pretty impressive thing

[00:17:51.840]
so we’re able to go right from MX,

[00:17:53.480]
find a box that might have an issue,

[00:17:55.410]
or maybe a data overlap, go click on it, take a look at it.

[00:17:59.677]
We are building our own imaging stations

[00:18:02.610]
but they can be a little bit more robust

[00:18:04.340]
or as simple as we want, to really capture this.

[00:18:07.140]
We are using some pretty,

[00:18:09.710]
I guess some pretty high tech camera equipment

[00:18:12.420]
with the tablet that’s all geared up.

[00:18:14.530]
On the left you can see sort of an instance

[00:18:17.180]
where we have a cart that we roll around from rack to rack.

[00:18:20.900]
We do dry, wet, carry on,

[00:18:22.850]
On the right is a little bit more stationary

[00:18:25.267]
and a bit more primitive of a solution

[00:18:28.290]
where we just move the core boxes in.

[00:18:30.770]
It’s very quick to meter it out,

[00:18:32.210]
it’s very quick to edit the photos

[00:18:34.760]
and that gives us a viewing option online

[00:18:37.120]
which has just been an incredible solution.

[00:18:39.480]
This has been great both for client collaboration,

[00:18:43.210]
I’m able to coordinate a job or look at issues in core

[00:18:46.540]
or really identify and nail down

[00:18:48.200]
some of the things that might be going on

[00:18:49.550]
or our field team might be having troubles with

[00:18:51.420]
all the way, anywhere I am in the world

[00:18:53.440]
on a pretty limited internet connection

[00:18:56.730]
which has been excellent

[00:18:57.670]
and of course we can zoom in once we get our assays back,

[00:19:00.610]
see what’s going on.

[00:19:02.590]
We get an excellent amount of resolution,

[00:19:04.440]
that’s more of a hardware thing,

[00:19:05.820]
but the usability of this,

[00:19:07.620]
of having that whole core shack altogether

[00:19:10.210]
is a big part of the solution

[00:19:14.960]
for how we want to treat and collect all of this data.

[00:19:19.270]
Once we get through all of this in MX Deposit

[00:19:21.760]
and what we’re collecting with Imago,

[00:19:23.640]
we’re wanting to get this up and visualized

[00:19:26.430]
as fast as we can.

[00:19:27.770]
So we’re dealing with a lot of drill hole data,

[00:19:29.500]
maybe multiple surfaces, imagery, solids and meshes,

[00:19:32.440]
and we’re really looking for

[00:19:33.600]
a software solution here that’s quick,

[00:19:36.580]
that’s rapid, and we can push updates to it

[00:19:38.870]
and as we’re drilling

[00:19:39.703]
we can really get a sense of where we are

[00:19:42.570]
and what we’re seeing as fast as we can.

[00:19:44.510]
Time is everything on this,

[00:19:46.370]
the faster we can get our heads wrapped around

[00:19:49.260]
what we’re seeing,

[00:19:50.110]
the better job we’re going to be able to do for our clients

[00:19:52.410]
and be able to move our projects forward.

[00:19:54.880]
So we use a bit of a combination of Leapfrog and Central.

[00:19:58.700]
Leapfrog Geo is of course an implicit modeling tool.

[00:20:02.460]
It gives us a lot of control over the data itself.

[00:20:05.700]
It’s very easy and simple to use.

[00:20:08.650]
Our field geologists can,

[00:20:10.280]
with very minimal knowledge about the program,

[00:20:12.030]
visualize, import and export viewers

[00:20:16.170]
and be able to really process the data

[00:20:18.010]
without even using the modeling portions

[00:20:20.965]
of the software itself.

[00:20:22.610]
Of course, once we get into actual modeling

[00:20:24.660]
of solids and surfaces,

[00:20:26.810]
it’s an incredible piece of software

[00:20:28.590]
and it’s very intuitive and easy to use.

[00:20:31.200]
It gives us the most nimble solution

[00:20:34.480]
for active exploration and visualization,

[00:20:36.980]
and for checking all of our historic data.

[00:20:38.983]
It’s an important part to mention

[00:20:40.800]
as well: as we bring that data in,

[00:20:43.930]
historic data in particular,

[00:20:46.490]
we want to make sure we’re visualizing it

[00:20:48.040]
and it’s looking right and starting to make sense.

[00:20:50.050]
We’ve got drill holes pointing in the right direction,

[00:20:52.440]
and we have dips and azimuths recorded correctly.

[00:20:55.680]
We often end up having to revisit old logs

[00:20:59.930]
and make sure that we have captured things properly

[00:21:02.660]
where some projects might’ve recorded negative dips

[00:21:06.510]
and others might have been positive dips

[00:21:09.720]
for, say, the inclination of a drill hole.

[00:21:12.690]
So that gives us a really good and quick and easy solution

[00:21:15.420]
to be able to verify that historic data

[00:21:18.270]
as we bring it all in

[00:21:19.570]
and start to build this robust database.
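
A minimal sketch of the dip-sign normalization being described, forcing every legacy collar record to one convention before visual checking in Leapfrog, might look like this; the column names are assumptions, and it presumes surface holes that all point downward:

```python
# Normalize mixed dip conventions in legacy collar tables so all downhole dips
# are negative. Assumes surface holes only (every hole points downward).
import pandas as pd

def normalize_dips(collars: pd.DataFrame, downhole_negative: bool = True) -> pd.DataFrame:
    df = collars.copy()
    sign = -1.0 if downhole_negative else 1.0
    df["dip"] = sign * df["dip"].abs()
    df["azimuth"] = df["azimuth"] % 360      # wrap azimuths into 0-360
    return df

legacy = pd.DataFrame({"hole_id": ["70-01", "92-14"],
                       "azimuth": [45.0, 405.0],
                       "dip":     [60.0, -55.0]})   # hypothetical mixed conventions
print(normalize_dips(legacy))
```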

[00:21:22.350]
It will be the continuing factor

[00:21:24.120]
for us to be able to continue to visualize data

[00:21:26.620]
as it comes in.

[00:21:27.810]
Central gives us an opportunity of course to version track that

[00:21:31.580]
across multiple desktops

[00:21:34.430]
and then basically acts as a shared viewing platform

[00:21:38.680]
and a shared cloud-based solution for housing the projects,

[00:21:45.090]
the actual 3D projects as we move along.

[00:21:47.380]
Oftentimes we end up with multiple streams as we go,

[00:21:51.610]
multiple versions and branches of projects

[00:21:53.910]
as we go over time.

[00:21:56.110]
We might have a group modeling on resources

[00:21:58.900]
while we have another group importing historic data

[00:22:01.217]
and so keeping track of a lot of these models

[00:22:03.360]
is extremely difficult

[00:22:05.380]
and Central gives us the opportunity

[00:22:06.910]
to really work on that.

[00:22:10.360]
Now, of course, we’ve got all this data coming in

[00:22:12.210]
and we really want to work together

[00:22:14.240]
on how do we keep it cohesive?

[00:22:17.370]
How do we manage versions?

[00:22:19.220]
How do we keep everything moving forward

[00:22:21.550]
as we get more assays, as we get new data,

[00:22:24.340]
and how can we keep amending that database

[00:22:26.390]
with better geologic understanding

[00:22:28.010]
and more data as we collect it?

[00:22:30.890]
In our case as an organization

[00:22:33.150]
we often use cloud-based file servers like Dropbox

[00:22:37.340]
or file share and a number of other ones as well,

[00:22:40.070]
where we have our central, cloud-based store of

[00:22:43.250]
essentially all of our project data

[00:22:44.900]
from reports through to all of this active drilling data

[00:22:50.080]
down to whatever geochemistry interpretations,

[00:22:54.048]
other information that we have.

[00:22:56.300]
We find that it integrates seamlessly with any,

[00:23:00.050]
it could be an Amazon web-based server or Dropbox,

[00:23:02.470]
but we can integrate really quickly with the base projects

[00:23:04.693]
that we have all locally stored.

[00:23:06.490]
All of our data is kind of backed up

[00:23:08.070]
and routinely exported outside of MX Deposit

[00:23:11.030]
into our local repositories

[00:23:13.180]
but our primary entry points for new data

[00:23:15.810]
become MX Deposit and all of the solutions like Imago

[00:23:21.630]
where we’re actually collecting the data in here.

[00:23:25.650]
This helps us keep QAQC validation all the way through.

[00:23:29.580]
We’ve got redundancy, so if we have to roll back in time

[00:23:31.997]
and take a look at things, we can;

[00:23:33.170]
it’s very important for us

[00:23:34.370]
as part of that whole data management solution.

[00:23:37.820]
And Central, as I mentioned,

[00:23:39.250]
allows us to have multiple concurrent models

[00:23:43.160]
referencing the same data and move them easily across,

[00:23:48.080]
whether it’s from the field crew

[00:23:49.800]
to the corporate office back here in Vancouver

[00:23:51.760]
or wherever we need to visualize a project.

[00:23:55.070]
Another important part is how do we pull that data down

[00:24:00.160]
once we do have it collected

[00:24:01.570]
and MX Deposit gives us the ability

[00:24:03.630]
to generate a form of a drill log,

[00:24:06.570]
preserving all that header information

[00:24:08.470]
as well as the detailed logging descriptions.

[00:24:10.680]
Now this is important for us

[00:24:12.070]
because a lot of the softwares that we’ve used in the past

[00:24:15.280]
don’t readily do this

[00:24:16.640]
and it’s an important part of filing assessment work

[00:24:20.220]
on claims, as well as reporting

[00:24:23.600]
when we’re preparing assessment reports and final reports

[00:24:26.790]
from the work programs that we’re doing.

[00:24:28.300]
We need logs in this type of format,

[00:24:30.520]
tabulated data doesn’t work

[00:24:32.960]
for a lot of the government organizations

[00:24:35.760]
and for reporting, of course.

[00:24:40.400]
We’re also able to manage a lot of the QAQC.

[00:24:42.950]
As I mentioned earlier in the presentation

[00:24:45.050]
we can really reference original

[00:24:47.640]
and historic standards and CRMs

[00:24:49.640]
but we’re able to really bring in all of the parameters

[00:24:52.520]
of passing and failing for standards,

[00:24:56.360]
how we want to treat core duplicates

[00:24:58.180]
whether they’re solid cores.

[00:24:59.290]
We can rank how things look

[00:25:02.720]
in terms of which are the better assays, for example,

[00:25:05.870]
like a fire assay or, for us, a digestion,

[00:25:09.360]
we can really drill down

[00:25:10.510]
and start to populate our final data.

[00:25:12.870]
Now, when we’re dealing with the CRMs,

[00:25:15.370]
we can really quickly visualize pass and fail thresholds

[00:25:18.330]
and bring together a lot of quick and fast decisions

[00:25:23.020]
on rerunning things.

[00:25:24.190]
And as a project evolves and as we get more data,

[00:25:26.410]
we can start to update these parameters

[00:25:28.850]
for passing and failing, for standard deviations,

[00:25:31.320]
just based on our populations

[00:25:33.100]
that we’re getting from the lab themselves.
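
The common CRM control-limit scheme implied here, warning outside two standard deviations of the certified value and failing outside three, can be sketched like this; the certified value, standard deviation and assays below are hypothetical, and MX Deposit’s actual rules are configurable rather than fixed:

```python
# Classify a CRM assay against certified value +/- 2SD (warn) and +/- 3SD (fail).
def crm_status(assay: float, certified: float, sd: float) -> str:
    deviation = abs(assay - certified)
    if deviation > 3 * sd:
        return "fail"    # candidate for re-running the batch
    if deviation > 2 * sd:
        return "warn"
    return "pass"

for assay in (1.02, 1.12, 1.25):               # hypothetical Au g/t results for one CRM
    print(assay, crm_status(assay, certified=1.05, sd=0.03))
```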

[00:25:35.800]
It’s extremely seamless to export from MX Deposit

[00:25:40.180]
as a table form.

[00:25:41.430]
We basically predefine an export

[00:25:44.460]
so that we’re essentially clicking one button

[00:25:46.450]
that is exporting our collar, our assays,

[00:25:49.171]
our various tables that we need to populate

[00:25:54.383]
the geomodel, and continue to push that up into the cloud

[00:25:58.110]
so that we can use that across platforms

[00:26:01.770]
or across different offices as I was mentioning.
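
What that one-button export step can look like once the tables are pulled down is sketched below, writing the collar, survey and assay tables to a cloud-synced folder that a Leapfrog Geo project re-imports from; the paths, table names and folder layout are assumptions, not an MX Deposit feature:

```python
# Write predefined export tables to a cloud-synced folder for re-import elsewhere.
from pathlib import Path
import pandas as pd

EXPORT_DIR = Path("~/Dropbox/ProjectX/leapfrog_import").expanduser()   # hypothetical path

def export_tables(tables: dict[str, pd.DataFrame]) -> None:
    EXPORT_DIR.mkdir(parents=True, exist_ok=True)
    for name, df in tables.items():
        df.to_csv(EXPORT_DIR / f"{name}.csv", index=False)   # e.g. collar.csv, assay.csv

# Usage (with previously loaded DataFrames):
# export_tables({"collar": collar_df, "survey": survey_df, "assay": assay_df})
```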

[00:26:04.600]
So I think one area for improvement

[00:26:07.080]
is going to be having integration across these.

[00:26:09.370]
We want to see where, right from MX,

[00:26:11.510]
it’s feeding into the implicit modeling itself

[00:26:14.150]
as we’re collecting the data, flagging additional errors,

[00:26:17.650]
and being able to just that much more rapidly

[00:26:20.850]
update a model.

[00:26:21.800]
But certainly, having that version after a certain hole

[00:26:25.140]
or a certain date,

[00:26:26.320]
we’re able to capture data with a quick export

[00:26:29.170]
and really continue to efficiently collect

[00:26:33.080]
and visualize this data.

[00:26:40.070]
We’re also able to

[00:26:41.080]
within Central, manage our licenses and connectors;

[00:26:43.900]
this gives us a really interesting opportunity

[00:26:46.190]
to work with our clients.

[00:26:47.880]
Oftentimes they’ll have their own geologic teams

[00:26:50.130]
or just the executive team itself,

[00:26:53.110]
which may want to have input on

[00:26:54.640]
how a project is progressing and how things are going

[00:26:57.240]
or just visualizing results as we get them.

[00:26:59.790]
Central allows us to quickly bring on other users

[00:27:02.950]
and we’re able to visualize and share things

[00:27:05.280]
in a very light branch of a model.

[00:27:07.200]
So we can actually share on a web-based viewer

[00:27:11.010]
to our clients or executive teams,

[00:27:14.320]
this is where we’re at, this is how this drill hole’s going,

[00:27:17.020]
we can add a comment, “We need to rerun some assays here

[00:27:21.700]
or let’s revisit these logs.

[00:27:23.440]
Lithology doesn’t seem to make sense

[00:27:25.150]
or maybe there’s a different interpretation than this

[00:27:26.960]
to go on.”

[00:27:27.980]
Central has been extremely useful for remote management

[00:27:30.510]
especially during COVID

[00:27:32.300]
where we don’t necessarily have

[00:27:34.140]
as many people on a site as we normally would

[00:27:36.340]
and travel has been more limited

[00:27:37.790]
but this has given us an opportunity

[00:27:40.050]
where I can view a project

[00:27:41.530]
and our field geologists can identify issues.

[00:27:44.540]
We can quickly click on a comment,

[00:27:46.850]
zoom right into an issue or problem,

[00:27:50.040]
a question or a point of just discussion that we might need.

[00:27:53.930]
It’s been well received for sure.

[00:27:55.800]
So in summary, looking at having a full solution

[00:28:00.880]
for early stage projects,

[00:28:02.500]
bringing in that historic data,

[00:28:05.180]
we really care about how easily and accurately

[00:28:10.240]
we can bring in historic data.

[00:28:12.730]
MX Deposit, along with Excel and other Access databases,

[00:28:16.070]
has allowed us to accomplish that error checking

[00:28:19.670]
and most importantly referencing

[00:28:21.270]
original analytical certificates

[00:28:24.660]
helps us reduce error and also increase the QAQC accuracy.

[00:28:29.800]
At the drill itself, we’re collecting

[00:28:37.030]
data right from the header itself

[00:28:40.270]
and we’re using Imago to collect imagery

[00:28:43.140]
that we can quickly view.

[00:28:45.620]
In the core shack itself

[00:28:46.750]
we’re logging directly into MX Deposit,

[00:28:48.740]
and using Imago to capture and catalog all these photos,

[00:28:52.080]
and then at the desktop using Central and Leapfrog

[00:28:55.150]
to both visualize and view, and continue to model

[00:28:57.840]
and build that understanding as we continue to go.

[00:29:00.870]
And then data management,

[00:29:03.380]
or the data management solutions such as Central,

[00:29:06.010]
will help us continue to track models as we go along.

[00:29:10.130]
I think that kind of concludes how we’ve implemented

[00:29:14.860]
these softwares to come up with a solution

[00:29:16.870]
for both historic data and ongoing data collection

[00:29:21.770]
and I’m appreciative of the time

[00:29:23.530]
to be able to tell you a little bit about

[00:29:25.260]
how we are implementing that in the industry.

[00:29:31.880]
Okay, great. James, thanks so much.

[00:29:34.050]
It was great to see

[00:29:35.310]
everything that Longford has been up to

[00:29:36.960]
and sort of how those options

[00:29:39.330]
are certainly letting people work together

[00:29:41.070]
including with your clients and even colleagues at site

[00:29:43.090]
to solve those problems quickly.

[00:29:44.380]
So thanks everyone for joining us as well.

[00:29:48.148]
If you have any questions

[00:29:50.230]
feel free to leave them in the chat.

[00:29:52.340]
We’re actually going to move along

[00:29:53.510]
to the next presentation

[00:29:56.060]
but we will follow up with those questions

[00:29:57.620]
within 24 to 48 hours

[00:29:58.990]
so just feel free to type them out if you have them

[00:30:02.360]
and thanks everyone for attending.

[00:30:04.653]
(light music)