
Lyceum 2021 | Together Towards Tomorrow

Geoscience data continues to transform every aspect of the Energy, Environmental, Civil and Mining industries, and we are still in the early days of the data revolution.

Historically, data has been isolated in silos across the value chain. But with significant changes in the quantity, quality, and type of data we have access to, increased computing power, and cheaper and faster analysis, we can now create dynamic, data-driven feedbacks to increase efficiency and reduce risk. This talk is about how we can use the data revolution to manage the lifecycle of infrastructure and assets.



Casey Mullen
Chief Architect, iTwin Platform, Bentley Systems

James Elgenes
Manager, Subsurface Data & Information, Equinor ASA

Bo Grave
Group Director, Head of Digital Business Development, Ramboll Group

Adam Pidlisecky
Associate Professor of Geoscience, University of Calgary

Lucy Potter
Geology and Mineral Resources Professional


40 min

See more Lyceum content

Lyceum 2021

Video Transcript

(calming music)

<v ->Welcome everyone to our Lyceum 2021 Panel Discussion</v>

on the Geoscience Data Revolution.

I am Adam Pidlisecky, a Professor of Geoscience

at the University of Calgary,

and I’ll be the moderator for this discussion.

Today, we are lucky to have a panel of experts

from energy, mining, civil, and environmental industries

who are going to share with us their insights

around the opportunities and challenges they see

associated with this emerging data science revolution.

With that I’d like to introduce our panelists.

<v ->Yes, hello everyone.</v>

My name is James Elgenes.

I’m the current Manager of Subsurface Data and Information

in Equinor, based in Stavanger, Norway.

My background is geophysics,

and I’ve worked in exploration, field development,

and production.

Mainly in Norway,

but also for a couple of international assets.

And some of my previous roles include

being the leading advisor for digitalization in exploration,

management advisor to the Head of Exploration,

and also project leadership within research and technology.

<v ->Hi good to be here.</v>

My name is Lucy Potter, I’m a geologist.

I spent my career working in exploration

for greenfield, brownfield deposits, advanced projects,

and operational geology

in the field of long-term strategic planning.

<v ->Hi, good to be here.</v>

I’m Bo Grave, and I’m Head of Digital Business Development

at Ramboll, which is an engineering consultancy company.

I’m not a geologist, but I’ve been working with data

and data transformation for many years.

<v ->Hi, and I’m Casey Mullen, and like Bo,</v>

I’m not a geoscience person,

but I’ve been learning a little bit about geoscience lately.

But I’m currently the Chief Architect

of Bentley’s iTwin Platform,

which is our platform for infrastructure digital twins.

iTwin is basically a synonym for digital twin

in Bentley speak.

But I’ve focused mainly on the data architecture

for federated digital twins

combining data from multiple sources into a cohesive whole.

<v ->Great, well thank you everybody for being here today.</v>

And so before we really get started,

I think for everybody’s benefit

we may want to talk about and clarify,

what do we mean by this geoscience data revolution?

And from our point of view,

what we’re talking about is this idea that,

first of all, all of our industries,

whether it be energy, environment, civil, or mining,

have been rooted in data.

But one of the challenges has been historically these data

have been highly siloed,

and they would not be kind of shared across the value chain.

Coupled with that,

the idea that the amount of data that we have today,

the types of data, the quality and quantity

have vastly increased,

as well as our ability to perform analysis on these.

Compute has gotten cheaper and faster.

And that’s now setting us up

to ask some really interesting questions.

If we can organize our data better,

what new information can we get about it, get out of it?

So really, we see this data science revolution as poised

to allow us to link data across the value chain.

And by doing that, we can open up this opportunity

to create really interesting dynamic data-driven feedbacks

that go beyond the silos, and link between the silos

so that we can start to think about

how geoscience data can become a core aspect

of managing the entire lifecycle of an asset,

so moving it beyond, say,

exploration data to production data,

to just geoscience data within the organization.

And so this presents a lot of opportunities,

but also has its challenges.

And with our panelists today,

we’d like to delve into

what some of those challenges and opportunities are,

and specifically we’re going to kind of look at three lenses.

What do we need to do around creating accessible data?

So how do we link these data up

so they are accessible to people throughout an organization?

And then once you’ve done that, what can we do?

What kind of new analyses can we perform on these data

so that we can get new insights?

So what does it take to create this uniform data structure

that allows us to pull together

new and interesting information?

And then finally, we’ll touch on a small piece around

once we have these data,

what organizations need to be thinking about and doing

to make sure that they’re ready to action

these new insights.

And so with that, why don’t we get started and jump into,

what are the opportunities and challenges

around organizing our data?

And I’m going to start with Casey on this one

as the infrastructure world

has really been thinking about this for quite some time

and is quite advanced.

So I’d love to, Casey, hear from you

about your experience in terms of moving from

digital twins as they’ve gotten more complex,

and what were some of the considerations

and some of the challenges?

<v ->Right, so in your intro, I think you really hit</v>

on a lot of the key subjects,

which are taking what have been historically data silos,

and by coordinating them together,

aggregating data, federating data.

Also, there’s a lot of importance

on making sure that it’s aligned,

both at the level of spatial coordinate systems

and geospatial coordinate systems,

as well as sometimes aligning semantics

from different data sources

that are coming from different formats, let’s say.

But Bentley’s vision of a digital twin, our iTwin,

is all about taking disparate kinds of data

and bringing them together.

So we focus a lot on engineering data,

both 3D engineering models,

as well as 2D engineering drawings, GIS kinds of data,

as well as data that’s taken from real observations

of the world,

so we take photographs and photogrammetry,

as well as laser scans and point clouds,

and combine that to create what we call reality models

that are sort of mesh models of the surfaces of things.

And combining that also with time series data.

So we see tremendous opportunity in bringing geo data

into that mix, geo data of various kinds,

from boreholes to the geological models,

because Bentley focuses on infrastructure

and the subsurface is literally the foundation

of the world’s infrastructure,

and is important for its resilience

and its sustainability over time.

So the ability to bring that subsurface data together

in the context of the engineering data,

and in the context of other direct observations of reality,

we get a true situation where the whole is greater

than the sum of the parts.

And so by being able to see the engineered infrastructure

in context with the subsurface,

it brings more value to both of those,

and opens up lots of opportunities for data analysis.

The first kind of insights that you gain

are just from being able to see it all together,

to have an engineer, or an owner or operator of a mine,

or of some infrastructure to be able to see

all of that data in one place.

But by bringing it together,

we also open up the ability to,

if everything’s in the same spatial coordinate system,

we can do queries to find out

what engineered infrastructure is in proximity

with which areas of the geological material.

We can bring the data together

to make it available

for machine learning kinds of algorithms

that can perhaps increase the accuracy,

decrease the uncertainty around areas

of the geological model.

So I’ll stop there,

but hopefully that gives the gist

of the kinds of things that we’re endeavoring to do

to break down those silos and bring the data together.
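Casey’s proximity example is easy to make concrete. The sketch below assumes everything has already been aligned to one spatial coordinate system; the geological unit, asset names, and coordinates are all invented for illustration:

```python
# Hypothetical sketch of the query Casey describes: once engineering and
# geological data share one spatial coordinate system, we can ask which
# engineered assets sit near which geological units.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding volume for a geological unit, in shared coordinates."""
    name: str
    min_xyz: tuple
    max_xyz: tuple

    def distance_to(self, p):
        # Distance from point p to the box; 0.0 if p lies inside it.
        d = [max(lo - c, 0.0, c - hi)
             for c, lo, hi in zip(p, self.min_xyz, self.max_xyz)]
        return sum(x * x for x in d) ** 0.5

def assets_near(assets, unit, radius):
    """Return the names of assets within `radius` of a geological unit."""
    return [name for name, p in assets.items() if unit.distance_to(p) <= radius]

clay_lens = Box("soft clay lens", (0, 0, -30), (50, 40, -10))
assets = {"pile A3": (10, 5, -12), "pile B7": (120, 80, -15)}
print(assets_near(assets, clay_lens, 5.0))  # pile A3 is inside the lens
```

A production digital twin would run this kind of query against a spatial index rather than a linear scan, but the principle, proximity between engineered assets and geological material, is the same.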

<v ->Thanks Casey.</v>

So I’m going to ask one quick follow-up before I jump over

to one of our other panelists.

And you know, you’ve been doing

this digital twin infrastructure,

digital twin for quite some time as an architect

and looking at the data types

and thinking about how to bring it together.

And cognizant that you’re just now getting into,

how do we bring in the geoscience data,

how do you see your first blush,

do you look at some of the geoscience data

and think to yourself,

I need to think about this differently?

Where are some of the similarities or differences there?

<v ->Well, there’s, the geo data,</v>

there’s different kinds, even within that broad area.

And so some of the data fits pretty well

within what we would consider our engineering models.

So borehole measurements that are very precise,

and we know exactly where they are,

and have very precise measurements

of how the geology changes along the borehole,

those can fit fairly well in with our engineering models.

There’s other kinds of models

that are sort of different animals.

And that we would anticipate bringing them in

to the digital twin as its own kind of thing.

So the geological models that are taking boundary conditions

that have been measured,

and then interpolating between them

to figure out what we probably think

the geology looks like at a given point,

that’s a different kind of thing

than some of the models we already have incorporated

into iTwins.

And so we’ll be bringing them in as new kinds of data.

And so that’s the part of the beauty and the vision

of a federated digital twin concept

is that when you have a new kind of data,

that really brings something new to the table,

you can incorporate that, you align it where necessary,

let’s say along spatial dimension,

but then just add it to the mix

so we can visualize it

along with the other kinds of models

that we can already visualize.

<v ->James, over to you on the Equinor side</v>

and sort of the broader use of data now,

how we’re transforming it in the energy industry.

<v ->Yeah, sure.</v>

Well I think I’ll start off by just talking about

a couple of the challenges,

and at the start, Adam, you mentioned that the data volumes

and the variety of data is sort of,

we’ve never seen so much.

And actually I’m not too concerned

about the volumes of data,

I’m going to sort of try and give a little bit

of a different take on this.

I think for us to succeed with data,

it really does come down to people and culture.

Just as an example, we can talk about

adherence to standards.

Standards can help us compare data

across geographical areas,

enable us to understand what data we have,

the lineage, the quality through the metadata.

But we suffer a little bit, you know, as an industry,

but also just in our company in general,

one asset, one field might have different standards,

naming conventions, metadata from the one next door.

And it’s really difficult to police these standards

’cause data are really seen as a means to an end

and not necessarily treated like an asset.

An important part of delivering projects and interpretation

should be taking control of the data,

delivering it with the right format, in the right place,

with the right naming standard,

annotated with the right quality metrics.

And if we don’t do that,

then data will actually end up,

rather than being an asset,

it will erode through time

because it just takes up disk space,

and ends up taking over a lot of people’s time.

So I guess this is about culture.

It’s about leaders actually incentivizing the use of data,

and it’s about employees actually building

the competence and understanding the standards

so they can share the data across.

So that’s a big challenge,

and then there’s also challenges around legal risks and IP,

understanding our data rights,

which is maybe slowing us down in the move to the cloud.

But I won’t focus on that for now.

But I think there are so many opportunities,

and as a company who are still working,

exploring in very mature areas,

I think now more than ever,

data are going to play a really important part

in our strategies.

And I see big opportunities coming from geoscience data

that’s been typically underutilized.

So, we typically use poststack seismic data

and well log data in our analysis.

There’s so many other wonderful data types out there

like mud gas data, and cuttings data,

and biostratigraphy and geochemistry.

And really, these data types

have never been analyzed together side by side before.

And I think this is a big opportunity

where we can leverage the power of cloud computing,

and open source data, and APIs

to start really linking these data types,

and even sort of questioning some geoscience applications

and new ways of thinking about things.

And we’re seeing some real tangible examples

that new technology can help us do this.

And in a way, it’s slightly back to basics.

We just want to deliver data that we already have

to the fingertips of our geoscientists

to enable them to do the work that they should be doing.

But instead, in the past,

they’ve probably spent too long themselves

searching for the data, but I think we’re really now seeing

we can crack that nut.
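The delivery discipline James describes, right naming standard, right metadata, can be sketched as a simple conformance check. The naming convention and required fields below are invented for illustration, not Equinor’s or OSDU’s actual standards:

```python
# Minimal sketch of a "data as an asset" delivery check: does a dataset
# follow the naming convention and carry the metadata needed for lineage
# and quality? Convention and fields are hypothetical.
import re

REQUIRED = {"owner", "acquisition_date", "crs", "quality_flag"}
# Hypothetical convention: FIELD_DATATYPE_vN, e.g. "VOLVE_POSTSTACK_v2"
NAME_RE = re.compile(r"^[A-Z0-9]+_[A-Z]+_v\d+$")

def validate(name, metadata):
    """Return a list of problems; an empty list means the delivery conforms."""
    problems = []
    if not NAME_RE.match(name):
        problems.append(f"name '{name}' violates convention")
    missing = REQUIRED - metadata.keys()
    if missing:
        problems.append(f"missing metadata: {sorted(missing)}")
    return problems

print(validate("VOLVE_POSTSTACK_v2",
               {"owner": "subsurface", "acquisition_date": "2019-06-01",
                "crs": "EPSG:23031", "quality_flag": "QC-passed"}))  # []
print(validate("final_final2", {"owner": "?"}))  # two problems flagged
```

Checks like this can run automatically at delivery time, so conformance does not depend on anyone "policing" the standards by hand.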

<v ->Great, thanks James.</v>

And one follow up on that,

really glad you brought up the standards piece,

because it’s clearly a big part of this equation

of organizing information well.

What’s your feeling on the industry role

as opposed to the corporate role,

and how do we, which one’s more important,

and how do we move them forward effectively?

As in, does the industry have responsibility

for setting the standards,

or should we just be boxing on

with doing it at a corporate level?

<v ->Yeah, no, I think if you asked this question</v>

three, four years ago,

you might’ve got a different answer,

but now we have something called

the Open Subsurface Data Universe, or OSDU,

and this is a really key thing.

It’s about collaborating and standardizing on areas

where we don’t need to compete on

across the different companies.

So things like data models and architecture,

infrastructure, APIs, applications,

these aren’t areas for competition,

these are areas where we should collaborate.

And that includes standards.

And I think it’s now a case of collaborating

across the industry to get these things right.

And it’s a big task of bundling together

all of these different standards

and trying to come up with one certain way of doing things.

But it’s so important, standards in general.

And speaking to geologists,

sort of internally in Equinor,

there’s so many different names and words

for facies, and rivers, and channels,

and everyone uses different terminology.

So it’s really hard to sometimes understand

the like for like and do analysis across.

So we really need to crack this one,

and I think OSDU is the way that we’re going to do it.

I’m pretty confident of that now,

it’s looking like it’s maturing really nicely.

<v ->Well thanks, and so Lucy, jumping over to you</v>

having worked across geology and mineral extraction,

what have you seen

as some of the opportunities and challenges

around organizing our data, and where we need to take that?

<v ->Well, I completely agree with James</v>

that it’s all about people.

I think that we have a lot of good people

working in the industry,

but they’re spending far too much time processing data,

crunching data when we have the tools to do it for us.

So we have a lot of great remote-sensing technology,

a lot of instrumentation to collect the information.

We now have the ability to use things like machine learning

to crunch the data for us and to give us outputs.

But we need to put that data in the hands of our experts

and allow them to make interpretations,

and to link the output from machine learning,

from these reams and reams of data

with geology and with the rocks.

And I think this concept of the mine value chain

is a really funny one.

We think of it as being extremely linear, right?

We start with exploration, and we end up with the metal

at the end of the day, when in fact it shouldn’t be a chain,

it should be a network.

And we need to all be using the same data.

We’re working with the same minerals, the same rocks,

the same metals, so we need to be using the same data,

have a common language, a common file format

to really be able to extract the most information from it.

I think we also can all see the shortcomings

in the human resources, so we don’t have enough people,

enough qualified people in our industry at the moment.

We need to bring in new people, younger generations,

and we also need to bring in people from other fields.

Mathematicians, electrical engineers,

and computer scientists to help us use these tools

and really extract the maximum amount of information

as we can.

I think in the exploration field,

the scarcity of data or the irregularity of data

is a huge problem.

We tend to drill and collect information

where we know we have good potential.

What we don’t know is the deeper subsurface area.

We have limited depth penetration with many of our tools,

and we haven’t been able to perhaps

extrapolate our information at surface,

and utilize this very patchy data

to be able to make predictions for the deeper discoveries

that we have yet to find.

<v ->Great, thanks for that.</v>

And just digging in a little bit deeper

on the sort of human resources piece there,

that was a really interesting comment about,

you know, we often talk about the data being this thing,

and we know we need to change culture,

but I guess you’re also saying

we probably need to bring some new skill sets

into the equation.

Is that, do you find a lot of your colleagues

are kind of recognizing that,

and what do we need to do as an industry

to encourage that

and bring in these kinds of new skill sets?

<v ->Yeah, that’s a tough one,</v>

because I think we need to attack it on all fronts.

So we need to make sure that we’re retaining people

who are already in the industry

by providing them new challenges,

providing them the tools and the connections that they need

to do their job well.

I think we need to retain the older generation,

or people who are nearing the ends of their careers

because they have the most wisdom, let’s say,

they’ve seen the most drill holes,

they’ve seen the most deposits.

They’re very familiar with the geological concepts.

We need to retain that.

And we also need to attract younger generations

and show them the wide variety of fields

that they can practice in within the mining industry.

And there’s another party, I think,

that needs to be more involved,

which is the indigenous groups and our local stakeholders,

who quite often don’t have the same accessibility

to the data that we do.

And I think that that’s a partnership

that we need to develop,

we need to give them the tools and the skills

to see the data,

to see the opportunities, and to work together

to make sure that we extract the best value from the assets,

but also providing back and working with our local partners.

<v ->And so jumping over to Bo then.</v>

Bo, just like to hear from you,

sort of the consultancy level,

and particularly, I guess,

in the European framework as well,

what do you guys see

as some of the opportunities and challenges

around starting to get data organized

so they’re accessible to a wider array of your engineers?

<v ->Well first of all,</v>

I see an opportunity for developing new services

and really utilize the data that are available.

How can we provide new services and, not least,

new insight?

When you come into operation and maintenance,

some of these constructions work

for 20, 30, 40 years, and what is happening during those phases?

So linking the geological data with the structural data,

with weather data and, if it’s offshore, the ocean data,

and so on, linking all that together

into a joined-up understanding of how these constructions,

for instance, live throughout a lifetime,

that could be a new service and new insight

that is difficult to get today.

So that could be some of the things,

some of the opportunities,

there’s a large group of opportunities like that.

Some of the things we have seen

is the discussion about ownership of the data

and how can we share data,

and how open and willing people are to share data.

I think at least we need to agree on

some ground rules on how to work in this field,

and who can have access to what data. I can understand

that there could be some concerns about competition,

or other things that mean limitations on

who has access to these data.

But in general, I think if we are to have the big benefits,

we need to share data,

and, as people have already said, standardization.

And I think that’s very important.

<v ->Great, thanks for that.</v>

And maybe just, if we could delve a little bit more

into some of the experience around that ownership piece,

and I think it’s,

particularly we’ve got these different industries

represented on this panel,

it might be in the mining industry

where you own an asset and all of the data

sort of associated with that asset,

the ownership is clear,

but I guess when you get into the engineering applications,

where you might have more of a joint venture situation,

and be working with collaboratively with other companies,

how do you navigate that data ownership piece?

<v ->Well I think right now,</v>

it’s more like a discussion from project to project

kind of discussion.

Every time we talk to a client,

how to work with this and how can we handle these issues.

Some of the data we collect for the clients,

and of course it’s the client who owns the data,

but we also need access to it ourselves, and so on.

So right now, it seems more like

being a project to project situation,

and I think we need good and best practices.

<v ->Right, so hopefully as these projects evolve,</v>

a pattern emerges,

and yeah, we can move to some kind of best practices.

Great, well thanks for that.

So now we’ve talked about a few of the challenges around

how to organize these data,

I guess if we move to this next step

around sort of advanced analyses,

one of the things, a question I have is,

you know, is there a bit of an iterative loop here

by showing people what we can do with the data,

do we motivate people to get better

about their data hygiene, as James was saying,

we need people to be committing to some standards.

So maybe I’ll start with James and the question around,

can you speak to any emergent analysis that you’ve seen

or been able to show other people that have said,

okay, now I get it,

now I understand why this is so important.

<v ->Yeah, I think so.</v>

I think we have quite a lot of ongoing

machine learning workflows that we’re doing.

So looking at a lot of well data

and basically trying to integrate

a lot of different data types

and come up with understanding of the petroleum system,

maybe there’s some missed hydrocarbons

that we haven’t seen before.

And what’s been really important

is building up those databases

so we can actually leverage the data and do this analysis.

And that’s really the hard work.

That’s 95% of it.

And I think we’re quite fortunate sometimes

because we have our home on the NCS, the Norwegian Continental Shelf, really,

that’s our laboratory, where

we actually have pretty good data,

and we’ve built up some databases over like 30, 40 years.

We call them sort of our gold standard.

It is really a competitive edge,

which has correct reference data,

they’re correctly aligned,

the naming conventions are right.

And we can really use those data

to create machine learning models.

So we can use the interpretations as labels,

we can build consistent predictive models,

and then we can do the analysis.

Sometimes the problem is,

if we’re going outside of the NCS,

which, again, is our real core area,

then we don’t necessarily have such good quality

in those other areas,

and the data standards and the alignment across

hasn’t been so good,

but we can use the NCS examples

as almost a way to incentivize the correct procedures

and the way of doing things in these other areas.

And it is working out like that,

if you have some international assets

and they’re seeing all this exciting work on the NCS,

and it is exciting.

A lot of data conditioning, processing, machine learning,

derived predictions,

these things are only possible

if you have a good starting point.

So definitely, I think it’s important to continue using

good use cases to then incentivize others to do the same.
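The workflow James outlines, gold-standard interpretations as labels and well-log measurements as features, can be sketched with a toy model. The log values and the nearest-centroid classifier below are illustrative stand-ins for a real machine-learning pipeline:

```python
# Toy sketch: existing interpretations become labels, well-log measurements
# become features, and a model fit on "gold standard" wells predicts
# lithology elsewhere. Numbers and method are illustrative only.

def fit_centroids(samples):
    """samples: list of (features, label). Returns label -> mean feature vector."""
    sums, counts = {}, {}
    for x, label in samples:
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict(centroids, x):
    """Assign x the label of the nearest class centroid."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

# Features: (gamma ray in API units, bulk density in g/cc) from interpreted wells.
labeled = [((25, 2.65), "sandstone"), ((30, 2.60), "sandstone"),
           ((110, 2.45), "shale"), ((120, 2.50), "shale")]
model = fit_centroids(labeled)
print(predict(model, (35, 2.62)))   # sandstone
print(predict(model, (105, 2.48)))  # shale
```

The point of the sketch is the dependency James stresses: the model is only as good as the labeled, consistently named, correctly aligned data behind it.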

<v ->Great, thanks for that James.</v>

And yeah, I think that point about a gold standard

is really an important one,

and it’s certainly one

that I think a lot of other industries are challenged with,

and maybe just with that, we’ll jump over to Lucy

’cause I would love to hear her kind of response to that

and have her share her own story,

because I think some of the,

getting gold standards in the mining industry

can be challenging.

<v ->Yeah, thanks Adam.</v>

Unfortunately, I think in the hard rock mining industry,

we tend to be a little bit slow adopters on some fronts,

and machine learning and automation is one of those.

So in the field of geostatistics,

we’ve known about measuring or quantifying uncertainty

for many decades now,

and yet we’re still not routinely using it

because there’s this fear of the unknown.

It’s not familiar, it’s not as tangible for geologists,

so they hesitate to adopt.

And I think we’re seeing the same thing

with machine learning at the moment,

which is that we’re hearing it as a buzz word,

but because people don’t quite know how to use it,

or how it connects to the rocks,

how it connects to the alteration they’re seeing

in the field, to their assay grades

that are linked to a certain deposit style,

they’re uncomfortable with it.

So I think what we need to do is,

first of all, make it much more accessible

so that the common everyday geologist

is familiar with machine learning,

understands that it’s a tool.

It’s one of many tools in their toolbox,

and then that they see a few success stories

where machine learning has been used in the industry,

and it’s been able to identify

either deposits that are known,

but without necessarily flagging them as such,

or new discoveries that I know there are some

very advanced groups in the industry that are working on.

And I think that’s going to help

kind of bring us forward in the next leap,

because there is a little bit of a stalled momentum now

as people are wondering,

how do I use this in my everyday job?

How do I make this big leap?

And we also have to think about the reporting codes.

So, we’ve looked at implementing

random forest machine learning techniques.

What does the CIM say about this?

What does the JORC code say about this?

And we have to be cautious

in using those new techniques in the way that we report,

so as to be sure that we properly understand them

before implementing them.

<v ->Great, thank you.</v>

Bo, I’m going to jump over to you on this,

and maybe pull on a thread that you put out there earlier.

You know, I think within the resource world,

we’re saying, how can we use this advanced analysis

to better understand our resources?

And you commented on that bringing data together

gives you an opportunity to provide new services,

which is, so you’re kind of saying

this is going to help us change what we do.

Can you comment a little bit more on that,

like where you see some service opportunities,

and also how the organization looks at that,

because that’s obviously a change in business as usual.

<v ->Yes, it’s a change in business.</v>

I don’t know much about mining industry or anything,

so I will try to stay out of that problem,

or stay out of exactly those issues.

But if I see some of the offshore things were done

in offshore wind or in other offshore construction,

we can actually now, from data, from the digital structures

or from the actual structures that is out there,

we can use sensor information

to predict the fatigue problems earlier and foreseeing

where the issues could be,

making what we call true digital twin

of the actual construction, see how much lifetime is left.

That’s a new service

that is kind of more detailed and precise predictions.

And I don’t know if you can do similar within mining,

or some of the other areas we’re talking about here.

<v ->Yeah, so that seems to me like one</v>

that completely changes your business model,

which is always a good, if it’s positive,

it’s always a good incentive for change, isn’t it?

Casey, coming back to you, just closing out this thing

on the new types of analysis,

you mentioned one of the first things with a digital twin

is getting stakeholders together

to be able to see all of this data.

Just want to kind of see if you can elaborate on that

and talk about some specifics where you’ve seen that

sort of be transformative in helping people adopt

the digital twin model.

<v ->Well, certainly in the infrastructure world,</v>

we’re dealing with the entire lifecycle of infrastructure.

So from the various stakeholders involved in the design,

in the construction,

and then the operation of infrastructure.

And so digital twins can be very helpful

in communication across those different lifecycle phases,

and among all the participants within those phases.

So particularly in the area of handing over data

from a construction process,

taking the as-built models and handing them over

to the operations folks.

I could see that being most relevant in the mining world

in relation to geoscience data.

You can imagine the owner of the mine

having a digital twin of the subsurface,

and then the infrastructure that they are going to build

to extract from that subsurface

is going to enhance their communication,

is going to enable them to make better decisions.

They’ll get a better feedback coming through the model,

so by the digital twin

being just a true reflection of reality

that is not necessarily catering

to one specific specialty or one specific stakeholder,

it does become the place where all the stakeholders

can come together.

If I could, I know James, and Bo, and Lucy,

everyone has talked a little bit about standards,

and it’s interesting to see that the geoscience world

has some of the same challenges

that the engineering world has

in that semantics and data structures

can vary all over the map.

And so I wanted to highlight that that’s something

that can also happen as part of bringing data

into a digital twin is,

if you have some of these standards

that are industry standards,

but are not necessarily followed on every project,

bringing data into a digital twin

is an opportunity to align the semantics.

So there can be heavy lifting in terms of mapping of data

to different semantics,

but it can be an opportunity

to do a little bit of translation, a bit of mapping

to align data that was previously unaligned.

I know that’s a big part of what happens

in the engineering world

is taking data and aligning it

as you are bringing it into the digital twin.
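Casey's point about aligning semantics as data comes into a digital twin can be sketched in a few lines of Python. This is purely illustrative: the source names, field names, unit conversion, and canonical schema below are all made up for the example, not any real iTwin or industry standard.

```python
# Illustrative sketch: translating records from two differently-named
# sources into one canonical schema while ingesting them into a twin.
# All field names and conversions here are hypothetical examples.

# Per-source mapping: source field -> (canonical field, value converter)
MAPPINGS = {
    "survey_a": {
        "depth_ft": ("depth_m", lambda v: v * 0.3048),  # feet -> metres
        "litho": ("lithology", str.lower),
    },
    "survey_b": {
        "depth": ("depth_m", float),  # already in metres
        "rock_type": ("lithology", str.lower),
    },
}

def to_canonical(source: str, record: dict) -> dict:
    """Translate one source record into the canonical schema."""
    out = {}
    for field, value in record.items():
        canonical, convert = MAPPINGS[source][field]
        out[canonical] = convert(value)
    return out

# Two records that meant the same thing but could not be compared before:
a = to_canonical("survey_a", {"depth_ft": 100.0, "litho": "Sandstone"})
b = to_canonical("survey_b", {"depth": "30.48", "rock_type": "SANDSTONE"})
print(a, b)
```

The heavy lifting Casey mentions is in building and maintaining the mapping table; once it exists, previously unaligned data lands in the twin speaking one language.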

So I think there are probably similar opportunities

in the geoscience space.

Like Bo, I'm drawing more on analogies

than on direct experience of geoscience,

but we’ve had a lot of value from using machine learning

to just recognize things that we observe in the model.

And so I can imagine machine learning analysis

that is looking at core samples

and being able to help provide more consistent recognition

of what's in them, as well as machine learning algorithms

that are learning how ore bodies are formed,

and to be able to do a really good job

of assisting to interpolate

what’s the full extent of the ore body

based on limited measurements and samples of it.

So it seems like there’s a lot of room

for using machine learning

to help decrease the amount of uncertainty

that you have around,

what’s really down there in the subsurface.
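As a toy illustration of estimating an ore body from limited samples: below is a plain inverse-distance-weighting estimator, a classical baseline that is far simpler than the machine-learning methods Casey describes, and all coordinates and grades are invented for the example.

```python
import math

# Toy sketch: estimate ore grade at an unsampled point from sparse
# drill-hole samples using inverse-distance weighting (IDW).
# A simple stand-in for the ML interpolation discussed in the panel;
# every number here is made up for illustration.

# (x, y, depth) coordinates and assayed grade (% metal) per sample
samples = [
    ((0.0, 0.0, 10.0), 2.1),
    ((50.0, 0.0, 12.0), 1.4),
    ((0.0, 50.0, 11.0), 0.9),
    ((50.0, 50.0, 15.0), 0.3),
]

def idw_grade(point, samples, power=2.0):
    """Inverse-distance-weighted grade estimate at `point`."""
    num = den = 0.0
    for coords, grade in samples:
        d = math.dist(point, coords)
        if d == 0.0:            # exactly on a sample: return its assay
            return grade
        w = 1.0 / d ** power
        num += w * grade
        den += w
    return num / den

# A point near the high-grade corner is pulled toward 2.1
print(round(idw_grade((5.0, 5.0, 10.0), samples), 2))
```

A learned model would replace the fixed distance weighting with relationships inferred from data, which is exactly where the uncertainty reduction Casey mentions comes from.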

<v ->Thanks, Casey.</v>

And unfortunately, we’ve lost Bo,

so, looking at the time, we have a couple of minutes left to close out.

So I really liked, Casey, what you said there

about how embracing a digital twin

focuses this idea of moving to standards.

It kind of becomes non-negotiable to some degree.

So just with the last couple of minutes,

I’m going to close off with James and Lucy.

So James, from your point of view,

if there's one thing you could change organizationally,

by motivating people or bringing in standards,

what would be the thing you would do tomorrow

if you could wave a magic wand

to accelerate this data transformation?

<v ->I think again, I need to bring it back</v>

to people, culture, and competence.

Even if we have all the new data in the cloud,

and it’s ready, and it’s available,

we still need people to learn about the data

’cause we're getting access to data

we've never really had our hands on

beyond specialist groups.

So all the data types I mentioned earlier,

cuttings, and then gas logs,

and biostratigraphy, and geochemistry.

It’s all great, but we need people to know how to use it.

And as well, more of our data

will be machine learning derived, right?

So we need people to understand a bit about

how the data were derived,

and you don’t really need to understand

all the details of an engine to drive a car,

but it helps to know a little bit,

and it’s the same for machine learning,

just knowing the basics so you can validate your model,

bringing in the expert geologist

so they can validate the models and things,

that gives a bit more certainty

to the data that you’re going to use.
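James's point about knowing just enough to validate a model can be as simple as holding back a few expert-labelled examples and checking the model's agreement with them. A minimal sketch, with invented lithology labels and a threshold that is a project choice rather than any standard:

```python
# Minimal sketch: check ML-derived interpretations against a small
# expert-labelled holdout set before trusting them downstream.
# The labels, predictions, and 90% threshold are all illustrative.

def agreement(expert_labels, model_labels):
    """Fraction of holdout samples where the model matches the expert."""
    assert len(expert_labels) == len(model_labels)
    hits = sum(e == m for e, m in zip(expert_labels, model_labels))
    return hits / len(expert_labels)

expert = ["sand", "shale", "sand", "coal", "shale"]
model  = ["sand", "shale", "sand", "shale", "shale"]

score = agreement(expert, model)
print(f"agreement: {score:.0%}")   # 4 of 5 match -> 80%
if score < 0.9:  # acceptance threshold: a project decision
    print("bring in the expert geologist before using this data")
```

That simple loop, expert labels in, an agreement number out, is the "knowing the basics" James describes: enough to decide when the specialist needs to look closer.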

And the other thing: we can change the data culture,

and the third pillar here is the technology.

We do have old and clunky monolithic applications

that don't necessarily link up too well

with the new API and cloud services.

So across our data, our technology,

and our people and culture,

we need to re-engineer basically everything,

and that's where all the real value is, in the cost savings.

<v ->Thanks.</v>

Lucy, from your point of view in the mining industry,

if you can change one thing,

what’s going to be the best bang for the buck?

<v ->Yeah well I mean, again, I’d have to agree with James.</v>

It’s all about the people.

And one of the ways that we can give those people more time

to do the good work that’s required

in interpreting all this data is by automating processes.

So geologists spend an inordinate amount of time

moving files, uploading, downloading, renaming,

fixing triangles, cleaning data,

doing many, many things that are repeated again and again

on a monthly process, or on a weekly process

for different iterations of models.

So if we can automate a lot of those processes

and really focus on interpreting the results,

it’s going to save us a mountain of time,

and then we can do better work at the end of the day.
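The repetitive file housekeeping Lucy describes is exactly the kind of chore a short script can take over. This is a generic sketch assuming a made-up export naming convention (`blockmodel_YYYYMMDD.csv`), not the workflow of any particular mining package.

```python
import re
import shutil
from pathlib import Path

# Sketch: file model exports into dated archive folders instead of
# renaming and moving them by hand each week. The naming convention
# (blockmodel_YYYYMMDD.csv) is a hypothetical example.

PATTERN = re.compile(r"blockmodel_(\d{4})(\d{2})\d{2}\.csv$")

def file_exports(inbox: Path, archive: Path) -> int:
    """Move matching exports into archive/YYYY-MM/; return count moved."""
    moved = 0
    for path in sorted(inbox.glob("blockmodel_*.csv")):
        match = PATTERN.match(path.name)
        if not match:
            continue  # leave anything unexpected for a human to check
        year, month = match.groups()
        dest = archive / f"{year}-{month}"
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(path), str(dest / path.name))
        moved += 1
    return moved
```

Run on a schedule (cron or Task Scheduler), something like this removes one weekly chore so the time goes into interpreting the model instead of moving its files.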

And I’ll just raise a final point,

which is that I think this data revolution

is going to help us and should help us

to look at risk and to manage risk

a lot better than we currently do in our industry.

We don’t have enough tools to measure risk,

and also to look at compound risk

or how risk is interrelated between the financial aspects,

the metallurgy, the geo-tech, safety risks.

We spend a lot of time talking about it,

but we’re not at the point yet

where we’re able to quantify the interconnectedness

of that risk,

and then to apply that for corrective measures

to reduce the risk going forward.

<v ->Great, sounds like some great insights.</v>

Bo, you’re back with us,

so 30 seconds closing out.

The question is, if you can wave your magic wand,

what would it be to accelerate

the digital transformation or this data revolution

in the consulting world?

<v ->I think, among other things,</v>

we also have to look at people, and our employees:

how can we actually utilize this?

Do we need more data scientists?

Do we need more insight into data, and people who can do that?

I think that's one of the ways, at least as I see it,

that would help a lot to progress down that path.

<v ->Great, so it’s definitely a people problem.</v>

And Casey, do you want to put the last word in there?

<v ->Well, I would say, as Lucy said,</v>

there's a lot of new technology,

people maybe have some trepidation,

some new things to learn, but get started.

The best time to have started down this pathway

was yesterday.

But you can always start tomorrow.

The sooner you get started,

the sooner you'll bring your data quality

and your data alignment issues to light.

As I think you mentioned earlier, Adam,

that people start just seeing the data together

and see like, oh, we do have some data alignment problems,

we do have some issues,

and you start the feedback loop

when you increase the visibility.

So do the first step.

Start dealing with one kind of data, and then add the next.

Don’t wait until you have the complete solution.

You want to be able to incrementally build

your digital twin over time,

and you will learn every step of the way.

<v ->Yeah, that’s great advice.</v>

Perfection is the enemy of success, isn’t it?

Great, well I really appreciate everybody’s time today.

This has been a really stimulating conversation.

I think it’s so great to get people

coming from these related, but unique industries,

and we can see the common pieces,

but also where there’s some disparate parts to it as well.

So very much appreciate you all

being part of this panel discussion today,

and thank you very much.

(calming music)