Lyceum 2021 | Together Towards Tomorrow
This session will showcase an implementation of MX deposit, Imago, Leapfrog Geo and Central on a 50,000m historic drilling data set, implementing data capture and visualisation for ongoing exploration.
President & CEO, Longford Exploration Services
Project Geologist – Seequent
Hi and welcome everyone to Lyceum 2021
for our North American region.
My name is Anne Belanger.
I’m a Project Geologist with Seequent in Vancouver,
and I’m pleased today to introduce James Rogers,
who’s CEO and President of Longford Exploration Services.
James is an exploration geologist
who’s been working in our exploration sector since 2007
and he’s worked in areas all over the world.
Today he’s going to be talking about,
“The implementation of streamlined data collection
and viewing in early exploration projects.”
James, would you like to start?
<v ->Perfect, Anne thanks so much for the opportunity</v>
to speak at Lyceum.
It’s been an excellent adventure
kind of getting started using the Leapfrog products
along with all the supplementary software solutions
over the last little while
so today I’m hoping to talk a little bit about
what that experience was like for us to onboard
a couple of our projects
as an early stage exploration company
and dive into a little bit about
how the resulting system has worked for us,
its benefits and identify some of those challenges
that we as early stage explorers often face
and how we’re able to implement these solutions for us.
A little bit about me, thank you for the introduction.
So I do run an exploration services company
called Longford Exploration.
We work around the world managing drill programs.
Our specialty is kind of being those first boots
on the ground, taking a project through its early stages
of exploration and drilling
and all the way up to that economic phase
where we pass things off to groups with more engineering
and financial experience than us.
My background as an explorationist
has been all across Africa, South America.
I’ve managed and run projects all around the world
and I think we’re up to 30 or 40 countries
that we’ve worked in now,
so in a relatively short span of time
we’ve accomplished a lot of things
and we’ve certainly managed to take a look at
a lot of projects and a lot of different data styles.
And so one of the challenges that we’ve always had
as we onboard either a new client
or one of our own projects
as we continue to pull things together
is really coming up with data collection solutions
that are quick to implement,
that can accurately bring together data
across all historic work
and all of our ongoing work,
and that let us bring everything together
into a single company-wide solution
so that our geologists are quickly trained,
all of our data collection is done and streamlined
and we can have oversight and accountability throughout it.
So really what we are looking for
and have looked for in a solution
always comes down to money first,
making sure that it’s a cost-effective solution
for what we’re looking for.
We want accountability so we know who’s entered that data,
who’s the geologist that’s logged the core,
who’s the geo-tech that’s shipped those samples off
and how are we able to work through all of our errors?
Oftentimes we’re working with data
that has multi-generational legacy to it.
Sometimes it’s paper drill logs back from the early 1960s
or even sometimes further back
and we need to be able to pull a lot of that historic data
into some form of usable database
so that we can keep continuity on it,
but also keep visualizing it
and be able to move the model
and our project understanding forward.
We need things that are quickly and easily customizable
in the field,
which is very important for early stage exploration.
We often don’t have robust infrastructure at site,
so we need to be able to quickly change and adapt
our pick lists and pick tables for how we’re logging
and collecting data, and we need to make it efficient.
We’re working long shifts collecting this data
and the last thing we want to be doing is being frustrated
at how we’re collecting it.
So what was interesting for us,
as we went through a number of different solutions
and tested a number of different collection
and management software packages,
is that getting an implementation together
really fast, with a robust solution
where all the pieces were talking to each other,
took about six years of trial and error.
Earlier this year we onboarded a historic data set
of about 50,000 meters of drilling
across 172 drill holes, going back to the 70s in this case,
with the bulk of it from the early 90s.
The data had been collected by various geologists
and had really just sat around;
it hadn’t been pulled together and compiled.
So this was the ultimate test scenario for us
to kind of pull things together.
So what I’m going to talk about today is
how we look at the various elements of data
that we need to collect
and how they’re all going to play together.
So the first point is historic data,
and that’s also old logs, old assay certificates,
all of that information.
The next part is the field data for our ongoing projects,
that’s going to be the collar data for drilling,
point data for soils,
all of the information about the field sites themselves,
what kind of condition the drill sites are in.
And then of course we’re going to bring our data collection
into the core shack.
So that’s going to be how we’re logging it,
how we’re imaging it, how we can make this more efficient,
and when we’re reprocessing a lot of drill core
it certainly takes a lot of time,
so the more efficiencies we can get in this
the better we’re going to be able to move the data along.
After we get that data collected,
we of course want to have a continuous stream
of that database itself evolving with both historic
and new data that’s coming in,
and we want to be able to push it out
into usable products,
both for making decisions on the fly
like visualizing where our drill hole progress is
if we’re hitting our targets,
and then moving it along into modeling
and actually producing some final products on it.
So starting first with the historic data itself.
This is often a challenge for many projects
that we first join and come into.
You know we’re dealing with complicated Access databases,
multiple geologists of different exploration campaigns
having different logging methods,
maybe different logging codes.
And there’s no way around it,
we always end up having to do a lot of work in synthesizing
and cleaning up this data.
What really helped us,
and what’s been our final solution
for our current data collection,
is MX Deposit.
So why that works well for us
is that we’re going to do a lot of legwork in Excel,
we’re going to pull a lot of the logs together
and a lot of that information down,
but then we’re going to be importing
all of that historic data into MX.
It’s going to identify the holes for us.
It’s going to show us where our errors, our overlaps
and our challenges are,
and it’s going to give us the opportunity,
which is probably one of its most important features
for dealing with historic data,
to pull information
directly from original assay certificates.
That gives us a chain of data evolution
so that we know we’re working with the most up-to-date
and the most correct data that we can.
That means that our QAQC,
if we’re onboarding a historic project,
we’re actually going to be able to pull out and plot
how our QAQC standards and blanks performed all the way back,
and not just be working with the resulting reports
from the previous exploration campaigns.
We’ll dive a little bit more into this as we go.
This is an example of a historic data set
that is pretty typical.
This is in good shape,
we’ve got a lot of different categories
that have been defined,
a lot of different attributes
that have been pulled together
and this is typically what we’re going to be wanting to build
as we get ready to bring that data into MX Deposit
for the import itself.
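As a rough illustration of that Excel legwork, here’s a minimal sketch in Python,
assuming a pandas DataFrame of historic logs with hypothetical column names
and legacy codes; it’s illustrative only, not MX Deposit’s import format.

```python
# Illustrative only: standardising legacy lithology codes from several
# historic campaigns into one dictionary before import.
import pandas as pd

# Hypothetical mapping from codes used by different campaigns
# to the single pick list we want to import.
CODE_MAP = {
    "GRNT": "GRANITE",
    "GR": "GRANITE",
    "BSLT": "BASALT",
    "BAS": "BASALT",
}

def standardise_lith(df):
    """Map legacy lithology codes to one dictionary and flag anything unmapped."""
    df = df.copy()
    df["lith_std"] = df["lith_code"].str.strip().str.upper().map(CODE_MAP)
    df["needs_review"] = df["lith_std"].isna()
    return df

logs = pd.DataFrame({
    "hole_id": ["DDH-72-01", "DDH-72-01", "DDH-91-14"],
    "from_m": [0.0, 12.5, 0.0],
    "to_m": [12.5, 40.0, 18.0],
    "lith_code": ["GRNT", "bas", "QZV"],   # "QZV" is left unmapped on purpose
})
print(standardise_lith(logs))
```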
Now when we bring data into MX Deposit,
it is incredibly fast for the amount of data
that we can bring in.
It’s going to give us some errors as we go, often,
as we bring in data,
especially when it gets back to older things,
it’s going to have a lot of missing intervals
or perhaps missed keystrokes for sample IDs,
and this gives us a great opportunity to flag
and identify where these challenges are
so we can get back in and reference them,
and we can actually remove or flag or add comment fields
on the data attributes themselves
so that we know, and can rank,
what the quality of that data is,
whether we’re just going to be quantitative at best
or whether we’re actually going to be able
to keep a qualitative data chain on any historic information
as we bring it in.
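A minimal sketch of those interval and sample checks, assuming a pandas
DataFrame of historic samples; this is illustrative only, not MX Deposit’s importer.

```python
# Illustrative only: the kind of gap, overlap and duplicate-sample-ID checks
# we lean on when onboarding historic logs (not MX Deposit's importer).
import pandas as pd

def check_intervals(df):
    """Flag gaps and overlaps between consecutive intervals within each hole."""
    df = df.sort_values(["hole_id", "from_m"]).copy()
    prev_to = df.groupby("hole_id")["to_m"].shift()
    df["gap"] = (df["from_m"] - prev_to) > 0       # missing interval before this row
    df["overlap"] = (df["from_m"] - prev_to) < 0   # overlaps the previous row
    df["dup_sample"] = df["sample_id"].duplicated(keep=False)
    return df

samples = pd.DataFrame({
    "hole_id":   ["H01", "H01", "H01", "H02"],
    "from_m":    [0.0, 10.0, 9.0, 0.0],
    "to_m":      [10.0, 20.0, 15.0, 5.0],
    "sample_id": ["A100", "A101", "A101", "A102"],  # duplicated ID on purpose
})
flags = check_intervals(samples)
print(flags[flags[["gap", "overlap", "dup_sample"]].any(axis=1)])
```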
I’ll come back to historic data a little bit more
as we summarize at the end
some of the other advantages of MX
and how the solution is working for us.
But another important part that I want to talk about is
the second part of our data collection,
and that’s going to be at the drill.
What we need to do,
and what we need to consider in a situation
where we need to collect data at an actual drill site itself,
is that we need to be able to function offline
without overwriting other files
or other editing that may be going on.
So MX gives us an opportunity to really pull out
and check out a drill hole itself.
We can go and make observations in the field
on a handheld tablet, entering GPS coordinates,
we can actually populate some coordinates right into it,
which is excellent,
pop a picture in, whatever we might need to do.
That gives us the opportunity to really bring that data,
or that data collector,
out to the drill rig itself.
We’ve actually successfully implemented
quick logging at the hole as well.
Checking a hole out is important for deep holes
when we’re shutting things down;
it gives us an opportunity
to make sure that we’ve got the downhole surveys
all populated,
and we’re keeping track of all that data coming in.
So the important parts for collecting data this way
are that it has to be field portable, it has to work offline,
it has to sync when you bring it back into the core shack
or back into the field office,
and it has to allow us to continue to move the project along.
One of the great things with the way
that we can work with MX is everything is very customizable.
Everything’s very easy to make.
We can start with the base template,
but we can now start collecting attributes
that we may only identify as being important
once we’re in the field,
maybe whether casing is left in or not,
or whether that casing is kept.
We are able to really pull together
multi-generations of coordinates.
We can have NAD 27 and then bring in NAD 83,
and we can rank which of those coordinates
we want to use in our final project,
or even the local grid.
So very quickly we start to be able to collect
a pretty robust amount of data.
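A hedged sketch of that coordinate handling, assuming pyproj is available;
the UTM zone, EPSG codes and the preference ranking are illustrative assumptions,
not how MX Deposit stores coordinates.

```python
# Illustrative only: bringing a legacy NAD27 UTM collar onto NAD83 and ranking
# which coordinate source to trust; zone and EPSG codes are assumptions.
from pyproj import Transformer

# Hypothetical collar in NAD27 / UTM zone 10N (EPSG:26710)
east_nad27, north_nad27 = 550000.0, 5450000.0

# Reproject to NAD83 / UTM zone 10N (EPSG:26910)
to_nad83 = Transformer.from_crs("EPSG:26710", "EPSG:26910", always_xy=True)
east_nad83, north_nad83 = to_nad83.transform(east_nad27, north_nad27)
print(f"NAD83: {east_nad83:.1f}E {north_nad83:.1f}N")

# Hypothetical preference order for the working database: highest rank wins.
PREFERENCE = {"DGPS_NAD83": 3, "HANDHELD_GPS_NAD83": 2, "HISTORIC_NAD27": 1}
sources = {
    "HISTORIC_NAD27": (east_nad27, north_nad27),
    "HANDHELD_GPS_NAD83": (east_nad83, north_nad83),
}
best = max(sources, key=PREFERENCE.get)
print("Using:", best, sources[best])
```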
A lot of the other solutions we looked at
required 24 to 48 hours of waiting time
while we brought that project, or that problem,
to the actual coders
behind the software itself
before they could build these fields for us.
In the case of MX almost everything we can build ourselves,
all the attributes and all the pick lists
and that’s pretty important for a junior exploration company
that’s nimble and has multiple projects,
we really want to focus on data collection
not on coding software on how to collect the data.
A little bit more
as we continue to look at the features of MX
and I think this is an area
that we’re going to continue to see advance
within this solution itself
but we’re able to start seeing
and tracking how many sample dispatches we have,
what our turnaround times are,
how many samples we have in the full database,
which ones are being accepted, which ones are not,
how many holes have been drilled,
and really get a good picture in summary
of what’s in progress in the logging process
and what’s been completed. Version tracking,
and being able to see who’s done what inside of a project
is pretty important for us as well
and that introduces that accountability
I spoke about earlier which is so important for us.
I think as part of this solution,
really the most important aspect
is being able to efficiently
and quickly collect accurate data in the core shack as well.
We’ve run into situations in the past with other software
and other solutions where we maybe come across
a new mineral assemblage
or a new lithology that we weren’t expecting,
and you have to remember we’re not always drilling
something that we have a great understanding of.
That has, again, presented a challenge for us
in being able to populate a pick list
with that new lithology,
so the geologist that’s logging the core
is oftentimes just entering that as a comment field,
and we rush through the job, we finish things up,
and now we’re dealing with this
after we’ve already collected the data
and we no longer have the rocks to look at
right in front of us.
And in MX, we just pop it right into a new pick list,
publish it, it’s up and populated in minutes,
and it gives us the opportunity to restrict that editing.
So if we’re worried maybe we have a junior geologist
or a younger geologist
that hasn’t had a lot of experience on these rocks,
often the tendency is to collect too much data
or to break out too many lithologies
which may not be important.
So we’re able to control that,
and it forces our geologists and our teams
to have a discussion about
whether we should be breaking something out
as a new lithology or not.
Some of the most time-consuming stuff in the core shack
has always been recording recovery, RQD,
the box inventories, and magnetic susceptibility.
So using tablets and these systems
we’re able to automatically calculate specific gravity
just from punching in measurements
directly on a tablet as we’re there.
That gives us two very important things.
One, it gives us a chance to make sure it makes sense
and it’s real data,
that we’re not making a data entry error,
that the SG is lining up within some realm of possibility
and makes sense.
And it gives us that immediate database population,
we’re not taking a clipboard back to enter data in.
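A minimal sketch of that tablet-side calculation, assuming an Archimedes-style
weight-in-air over weight-in-water measurement; the plausibility bounds are
illustrative, not a standard.

```python
# Illustrative only: specific gravity from weight in air and weight in water,
# with a broad sanity check so obvious entry errors get caught at the rack.
def specific_gravity(weight_air_g, weight_water_g):
    """Archimedes method: SG = Wair / (Wair - Wwater)."""
    return weight_air_g / (weight_air_g - weight_water_g)

def is_plausible(sg, low=2.0, high=5.5):
    """Flag SG values outside a broad range expected for most rock types."""
    return low <= sg <= high

sg = specific_gravity(weight_air_g=812.4, weight_water_g=518.9)
print(f"SG = {sg:.2f}, plausible: {is_plausible(sg)}")   # SG = 2.77, plausible: True
```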
So MX Deposit is probably one of our biggest tools
for collecting all of these things,
all this data inside of the core shack itself as well
right up to the sample dispatch.
Its integration across multiple platforms,
whether it’s on an Android tablet or on a PC,
is certainly useful for us as well.
We’re actually able to collect or view data
on anything from a cell phone, to a wireless,
weatherproof tablet,
right up to a PC or desktop in the core shack.
That gives us a lot of flexibility
when we’re collecting data.
Our geotechs are spending a lot less time on data entry,
which means we’re able to process more core,
and they’re also less fatigued from manual data
being punched in at the end of the day
into a spreadsheet from clipboards
or other means of recording.
As I mentioned, we’re very much able to edit and manage
our pick list here
so if we come across a new alteration
we’re easily able to give it a new code and description,
the same with lithologies.
So these are customizable
and very easily edited in the field.
This is just an example of one of the projects
that we’ve been working on here.
We’re even able to color code these,
with a little bit of extra work with the team,
so they’re accurately representing our colors in the model,
which is again
just making that geologist’s and logger’s life
that much easier.
One of the next pieces of software that we really bring
to our solution that we’re currently employing here now
is Imago and this is an interesting solution
and we’d been looking at this for a couple of years
and we really started to deploy it earlier this year
on about a 10,000 meter a year drill program.
At first we were pretty reluctant,
given how clunky a lot of the data collection,
or the image collection, and the viewers had been,
but we’re pretty impressed with Imago.
It is web-based so you need an internet connection
to really upload and utilize it.
That being said, it is built in such a way
that it works extremely well
even on low bandwidth internet connections
which is excellent and incremental uploads
really helped us out with that.
So what we’re looking for
is to be able to really get good imagery
and be able to use it.
It’s one thing to have a file structure
that’s got 30,000 images of core in it
but that’s not really going to give us a lot of data.
So we were really wanting to leverage something that could
allow us to view our results,
review what that core was looking like,
and revisit our logs
without having to get back out to the core stacks,
especially when we’re waiting four months for results,
as a lot of us were in 2020 and early 2021 here.
And Imago does integrate seamlessly with MX
and it’s a pretty impressive thing
so we’re able to go right from MX,
find a box that might have an issue,
or maybe a data overlap, go click on it, take a look at it.
We are building our own imaging stations,
and they can be as robust
or as simple as we want to really capture this.
We are using some pretty high tech camera equipment
with a tablet that’s all geared up.
On the left you can see sort of an instance
where we have a cart that we roll around from rack to rack.
We do dry, wet, and carry on,
and on the right is a little bit more stationary
and a bit more primitive of a solution,
where we just move the core boxes in.
It’s very quick to mete it out,
it’s very quick to edit the photos,
and that gives us a viewing option online
which has just been an incredible solution.
This has been great for client collaboration;
I’m able to coordinate a job or look at issues in core,
or really identify and nail down
some of the things that might be going on
or that our field team might be having trouble with,
from anywhere I am in the world
on a pretty limited internet connection,
which has been excellent,
and of course we can zoom in once we get our assays back
and see what’s going on.
We get an excellent amount of resolution,
that’s more of a hardware thing,
but the usability of this,
of having that whole core shack together online,
is a big part of the solution
for how we want to treat and collect all of this data.
Once we get through all of this with MX Deposit
and what we’re collecting with Imago,
we want to get it up and visualized
as fast as we can.
So we’re dealing with a lot of drill hole data,
maybe multiple surfaces, imagery, solids and meshes,
and we’re really looking for a software solution here
that’s quick, that’s rapid, that we can push updates to,
and as we’re drilling
we can really get a sense of where we are
and what we’re seeing as fast as we can.
Time is everything on this,
the faster we can get our heads wrapped around
what we’re seeing,
the better job we’re going to be able to do for our clients
and be able to move our projects forward.
So we use a bit of a combination of Leapfrog and Central.
Leapfrog Geo is of course implicit modeling.
It gives us a lot of control over the data itself.
It’s very easy and simple to use.
Our field geologists can,
with very minimal knowledge of the program,
visualize, import data and export views,
and be able to really process the data
without even using the modeling portions
of the software itself.
Of course, once we get into actual modeling
of solids and surfaces,
it’s an incredible software
and it’s very intuitive and easy to use.
It gives us the most nimble solution
for active exploration and visualization,
and, in fact, for checking all our historic data.
It’s important to mention as well that,
as we bring that data in,
for historic data in particular
we want to make sure we’re visualizing it,
that it’s looking right and starting to make sense,
that we’ve got drill holes pointing in the right direction,
and that we have dips and azimuths recorded correctly.
We often end up having to revisit old logs
and make sure that we have captured things properly,
where some projects might’ve recorded negative dips
and others might have recorded positive dips
for, say, the inclination of a drill hole.
So that gives us a really good and quick and easy solution
to be able to verify that historic data
as we bring it all in
and start to build this robust database.
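A hedged sketch of that dip and azimuth check, assuming collar data exported
to a pandas DataFrame; treating negative dips as downhole is an assumption here,
not a Leapfrog requirement.

```python
# Illustrative only: forcing one dip sign convention across historic collars
# and flagging out-of-range values before the data goes into a model.
import pandas as pd

def normalise_collars(collars, downhole_is_negative=True):
    """Apply a single dip sign convention and flag suspicious survey values."""
    df = collars.copy()
    sign = -1 if downhole_is_negative else 1
    df["dip_norm"] = sign * df["dip"].abs()
    df["suspect"] = (df["dip"].abs() > 90) | (df["azimuth"] < 0) | (df["azimuth"] >= 360)
    return df

collars = pd.DataFrame({
    "hole_id": ["DDH-91-01", "DDH-91-02", "DDH-72-07"],
    "azimuth": [45.0, 230.0, 400.0],   # 400 is out of range on purpose
    "dip":     [-60.0, 55.0, -90.0],   # mixed sign conventions
})
print(normalise_collars(collars))
```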
It will be the continuing factor
that lets us keep visualizing data
as it comes in.
Central gives us an opportunity to version track that
across multiple desktops
and then basically acts as a shared viewing platform
and a shared cloud-based solution for housing the projects,
the actual 3D projects as we move along.
Oftentimes we end up with multiple streams as we go,
multiple versions and branches of projects
as we go over time.
We might have a group modeling on resources
while we have another group importing historic data
and so keeping track of a lot of these models
is extremely difficult
and Central gives us the opportunity
to really work on that.
Now, of course, we’ve got all this data coming in
and we really want to work together
on how do we keep it cohesive?
How do we manage versions?
How do we keep everything moving forward
as we get more assays, as we get new data,
and how can we keep amending that database
with better geologic understanding
and more data as we collect it?
In our case as an organization
we often use cloud-based file servers like Dropbox,
or a number of other file shares as well,
where, alongside our Central cloud,
we essentially have all of our project data,
from reports through to all of this active drilling data,
down to whatever geochemistry interpretations
and other information that we have.
We find that it integrates seamlessly with any of them,
whether it’s an Amazon web-based server or Dropbox,
and we can integrate really quickly with the base projects
that we have locally stored.
All of our data is kind of backed up
and routinely exported outside of MX Deposit
into our local repositories
but our primary entry points for new data
are MX Deposit and solutions like Imago
where we’re actually collecting the data.
This helps us keep QAQC validation all the way through.
We’ve got redundancy, so if we have to roll back in time
and take a look at things we can,
which is very important for us
as part of that whole data management solution.
And Central, as I mentioned,
allows us to have multiple concurrent models
referencing the same data and move them easily across,
whether it’s from the field crew
to the corporate office back here in Vancouver
or wherever we need to visualize a project.
Another important part is how do we pull that data down
once we do have it collected
and MX Deposit gives us the ability
to generate a form of a drill log,
preserving all that header information
as well as the detailed logging descriptions.
Now this is important for us
because a lot of the softwares that we’ve used in the past
don’t readily do this
and it’s an important part of filing assessment work
on claims, as well as reporting,
when we’re preparing assessment reports and final reports
from the work programs that we’re doing.
We need logs in this type of format,
tabulated data doesn’t work
for a lot of the government organizations
and for reporting, of course.
We’re also able to manage a lot of the QAQC.
As I mentioned earlier in the presentation
we can really reference original
and historic standards and CRMs
but we’re also able to bring in all of the parameters
for passing and failing standards,
and how we want to treat core duplicates,
whether they’re solid cores.
We can rank how things look
in terms of which are the better assays, for example
a fire assay or, for us, a digestion,
and we can really drill down
and start to populate our final data.
Now, when we’re dealing with the CRMs,
we can really quickly visualize pass and fail thresholds
and bring together a lot of quick and fast decisions
on rerunning things.
And as a project evolves and as we get more data,
we can start to update these parameters
for passing and failing, for standard deviations,
just based on our populations
that we’re getting from the lab themselves.
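A minimal sketch of those control limits, assuming warn at ±2 standard
deviations and fail at ±3 against a certified value; the numbers are hypothetical
and this is not MX Deposit’s QAQC module.

```python
# Illustrative only: classifying CRM results against control limits derived
# from the certified value, then recomputing spread from our own lab results.
import statistics

def crm_status(measured, certified, sd):
    """Classify a CRM result: pass within 2SD, warn within 3SD, fail beyond."""
    z = abs(measured - certified) / sd
    if z <= 2:
        return "pass"
    if z <= 3:
        return "warn"
    return "fail"

# Hypothetical Au (g/t) results for one standard across several dispatches.
certified_au, certified_sd = 1.52, 0.05
results = [1.49, 1.55, 1.61, 1.38, 1.53]
for r in results:
    print(r, crm_status(r, certified_au, certified_sd))

# As the project evolves, the limits can be revisited using our own population.
print("Observed SD from lab population:", round(statistics.stdev(results), 3))
```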
It’s extremely seamless to export from MX Deposit
in table form.
We basically predefine an export
so that we’re essentially clicking one button
that exports our collar, our assays,
and the various tables that we need to populate
the geomodel, and we continue to push that up into the cloud
so that we can use it across platforms
or across different offices as I was mentioning.
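A rough sketch of that kind of predefined one-click export, assuming pandas
DataFrames and hypothetical table names and columns; MX Deposit’s actual export
format isn’t shown here.

```python
# Illustrative only: a predefined "one-click" export of the collar, survey and
# assay tables to a dated folder that a cloud-synced drive then distributes.
from datetime import date
from pathlib import Path
import pandas as pd

EXPORT_TABLES = ["collar", "survey", "assay"]

def export_for_geomodel(tables, root="exports"):
    """Write each predefined table to CSV in a folder named for today's date."""
    out_dir = Path(root) / date.today().isoformat()
    out_dir.mkdir(parents=True, exist_ok=True)
    for name in EXPORT_TABLES:
        tables[name].to_csv(out_dir / f"{name}.csv", index=False)
    return out_dir

tables = {
    "collar": pd.DataFrame({"hole_id": ["H01"], "east": [550000.0],
                            "north": [5450000.0], "rl": [980.0]}),
    "survey": pd.DataFrame({"hole_id": ["H01"], "depth_m": [0.0],
                            "azimuth": [45.0], "dip": [-60.0]}),
    "assay":  pd.DataFrame({"hole_id": ["H01"], "from_m": [10.0],
                            "to_m": [11.0], "au_gpt": [1.2]}),
}
print("Exported to", export_for_geomodel(tables))
```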
So I think one area for improvement
is going to be having integration across these.
We want to see it, right from MX,
feeding into the implicit modeling itself
as we’re collecting the data, flagging additional errors,
and being able to that much more rapidly
update a model.
But certainly, having that version after a certain hole
or a certain date,
we’re able to capture data with a quick export
and really continue to efficiently collect
and visualize this data.
We’re also able, within Central,
to manage licenses and connectors,
and this gives us a really interesting opportunity
to work with our clients.
Oftentimes they’ll have their own geologic teams,
or just the executive team itself,
which may want to have input on
how a project is progressing and how things are going
or just visualizing results as we get them.
Central allows us to quickly bring on other users
and we’re able to visualize and share things
in a very light branch of a model.
So we can actually share on a web-based viewer
to our clients or executive teams,
this is where we’re at, this is how this drill hole’s going,
we can add a comment, “We need to rerun some assays here,
or let’s revisit these logs.
Lithology doesn’t seem to make sense
or maybe there’s a different interpretation than us
to go on.”
Central has been extremely useful for remote management
especially during COVID
where we don’t necessarily have
as many people on site as we normally would
and travel has been more limited,
but this has given us an opportunity
where I can view a project
and our field geologists can identify issues.
We can quickly click on a comment,
zoom right into an issue or problem,
a question or a point of just discussion that we might need.
It’s been well received for sure.
So in summary, looking at having a full solution
for early stage projects,
bringing in that historic data,
what we really care about is how easily and accurately
we can bring in that historic data.
MX Deposit, along with Excel and other Access databases,
has allowed us to accomplish that error checking,
and most importantly, referencing
original analytical certificates
helps us reduce error and also increase the QAQC accuracy.
At the drill itself, we’re collecting
data right from the header,
and Imago is used to collect imagery
that we can quickly view.
In the core shack itself,
MX Deposit is logging directly into that database,
Imago captures and catalogs all these photos,
and then at the desktop we’re using Central and Leapfrog
to both visualize and continue to model
and build that understanding as we continue to go.
And then the data management solutions such as Central
help us continue to track models as we go along.
I think that kind of concludes how we’ve implemented
these softwares to come up with a solution
for both historic data and ongoing data collection
and I’m appreciative of the time
to be able to tell you a little bit about
how we are implementing that in the industry.
<v ->Okay, great James, thanks so much.</v>
It was great to see
everything that Longford has been up to
and sort of how those options
are letting people certainly work together
including with your clients and even colleagues at site
to solve those problems quickly.
So thanks everyone for joining us as well.
If you have any questions
feel free to leave them in the chat.
We’re actually going to move along
to the next presentation
but we will follow up with those questions
within 24 to 48 hours
so just feel free to type them out if you have them
and thanks everyone for attending.