Variability is a phenomenon in the physical world to be measured, analysed and where appropriate explained. By contrast, uncertainty is an aspect of knowledge.

Sir David Cox

One of the primary tasks of applied geologists is to build predictive models. These vary enormously in scope and application.

Geological phenomena are variable, sometimes highly so, and the information available is generally sparse. There will inevitably be differences between prediction and reality, and multiple possible geological models will honour the facts.

Geological models rely on different types of input:

1. Hard data from drilling, mapping and underground exposure, in the form of:

  • Direct physical measurement of rock properties (grades, density, UCS)
  • Observations of physical attributes (lithology, colour, grain size)
  • Identification of location and orientation of boundaries/contacts

2. Soft data from indirect measurement (e.g. geophysical response)

3. Interpretation, i.e. hypotheses put forward to explain the available data

The relative importance of interpretation versus hard and soft data varies, but generally, the more specific and operational the model, the more it relies on hard data. However, even in the best-informed models (e.g. grade control block models), the quantity of hard data is very small. A typical metalliferous grade control pattern samples only between 0.01% and 0.1% of the deposit volume. Early-stage models may be built on two or more orders of magnitude less data.
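To put that figure in context, a rough back-of-envelope calculation is sketched below in Python. The hole spacing and diameter are assumed, illustrative values only, not taken from any particular operation.

```python
# Illustrative only: rough volume fraction sampled by a grade control drill pattern.
# Hole spacing, diameter and pattern are assumed values, not from a specific operation.
import math

hole_spacing_x = 10.0   # m between holes along strike (assumed)
hole_spacing_y = 5.0    # m between rows (assumed)
hole_diameter = 0.115   # m, e.g. an RC hole (assumed)

# Each hole 'represents' a spacing_x * spacing_y column of rock of the same height,
# so the sampled fraction is the ratio of cross-sectional areas.
hole_area = math.pi * (hole_diameter / 2.0) ** 2
cell_area = hole_spacing_x * hole_spacing_y
fraction = hole_area / cell_area

print(f"Sampled fraction: {fraction:.5%}")   # roughly 0.02%, within the range quoted above
```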

Implicit modelling

Implicit modelling describes the distribution of a target variable by a unique mathematical function derived directly from underlying data and high-level user-specified parameters. This approach may be applied to discrete variables such as lithology, to continuous variables such as geochemical grades, or to binary indicators of continuous variables.

Implicit modelling creates a unique solution from any set of input data and a given set of parameters (whether for a geometric model or a grade interpolant). The choice of parameters is clearly an important consideration in matching the character of the output model to the phenomenon being described. In many situations, the hard data will simply be insufficient on their own to support the creation of a geologically reasonable model.
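The general idea of a single function fitted to the data plus a few user parameters can be illustrated with a generic radial basis function interpolant. The sketch below uses SciPy's RBFInterpolator with made-up sample locations and grades; it is a minimal illustration of the approach, not Leapfrog's actual algorithm.

```python
# Minimal sketch of a data-driven implicit grade interpolant using radial basis
# functions. The coordinates and grades below are invented for illustration.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hard data: sample locations (x, y, z) and measured grades (e.g. % Cu)
xyz = np.array([
    [0.0,    0.0, -10.0],
    [25.0,   5.0, -20.0],
    [50.0,  -5.0, -15.0],
    [75.0,  10.0, -30.0],
    [100.0,  0.0, -25.0],
])
grades = np.array([0.8, 1.2, 0.4, 1.5, 0.6])

# A single global function f(x, y, z) describing grade everywhere, derived
# directly from the data plus a small set of user parameters (kernel, smoothing).
interpolant = RBFInterpolator(xyz, grades, kernel="linear", smoothing=0.0)

# The same unique function can be evaluated at any unsampled location.
query = np.array([[60.0, 0.0, -18.0]])
print(interpolant(query))
```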

Hypothesised and interpreted data

So how do we make the output of a data-driven model look like our interpreted understanding of the geological phenomenon?

The solution is to add ‘hypothesised data’ until the model adequately describes our ‘geological interpretation’.

Using Leapfrog implicit modelling, three different types of hypothesised data may be added to geometric models – structural disks (identify the location and orientation of a geological contact), polylines (identify the location and facing direction of a contact at polyline node points), and curved polylines (same as polylines but with more points to which orientations can be added). In grade interpolation models, polyline contours may be added at a given grade threshold – effectively acting as assay information.
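As a rough illustration of how such polyline contours can act like assay information, the sketch below treats interpreted contour nodes as pseudo-samples fixed at the threshold grade and stacks them with the hard data before refitting. The locations, grades and threshold are invented, and Leapfrog's internal handling is more sophisticated than this.

```python
# Sketch of 'hypothesised data' in a grade interpolant: pseudo-samples placed
# along an interpreted contour, all assigned the threshold grade, and stacked
# with the hard data before refitting. All values are illustrative only.
import numpy as np
from scipy.interpolate import RBFInterpolator

threshold = 0.5  # grade contour the geologist has drawn (assumed % Cu cut-off)

# Interpreted polyline nodes where the geologist believes grade = threshold
contour_xyz = np.array([
    [110.0,  0.0, -20.0],
    [120.0,  5.0, -22.0],
    [130.0, -5.0, -24.0],
])
contour_grades = np.full(len(contour_xyz), threshold)

# The same illustrative hard data as above, re-declared so this sketch is self-contained
hard_xyz = np.array([[0, 0, -10], [25, 5, -20], [50, -5, -15],
                     [75, 10, -30], [100, 0, -25]], dtype=float)
hard_grades = np.array([0.8, 1.2, 0.4, 1.5, 0.6])

# Hard and hypothesised data stay in separate arrays, so revising or removing
# the interpretation is simply a matter of refitting without the pseudo-samples.
all_xyz = np.vstack([hard_xyz, contour_xyz])
all_grades = np.concatenate([hard_grades, contour_grades])

interpolant = RBFInterpolator(all_xyz, all_grades, kernel="linear")
print(interpolant(np.array([[115.0, 0.0, -21.0]])))
```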

This process of ‘making up’ data to force a geological interpretation is exactly the same as in more traditional CAD-based sectional approaches. The user ‘draws’ an interpretation (usually a sectional polygon), which is then triangulated into a wireframe. The polygon mixes hard data points (drill hole contact locations) with interpreted locations. If the data or interpretation change (e.g. new hard data are added), the drawn inputs need to be modified and the process repeated.

One of the clear advantages of implicit modelling is the separation between hard and hypothesised data. When new drilling information is added, it is incorporated immediately, and the result can be examined to decide whether the hypothesis is robust (i.e. whether the new data confirm the interpretation). If not, the geological interpretation will need to be changed and/or the modelling parameters and hypothesised data modified.

This incremental modelling approach is very well aligned with the scientific method. Geological models (including geometry and grade models) represent hypotheses and ideas that summarise and explain the available information. Before a new hole is drilled, the model provides a prediction; once the hole is drilled, it directly tests that prediction.

A good model is ‘fit for purpose’, but how do we define this?

‘Fit for purpose’ means that the model meets or exceeds the user’s needs. But there are often multiple, conflicting needs. In the case of predictive models, ‘fit for purpose’ generally means that the prediction lies within an acceptable tolerance.

In the case of a numeric model of grades, ‘fit for purpose’ is a fairly straightforward concept to define and quantify, e.g. that the predicted grade of copper in a grade control pattern will be within +/- 5% of the reconciled mill grade 90% of the time.
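Such a criterion is straightforward to evaluate once reconciliation data are available. The sketch below checks invented monthly figures against an assumed ±5% tolerance and 90% target.

```python
# Sketch of checking a 'fit for purpose' criterion: predicted grade within
# +/- 5% of the reconciled mill grade at least 90% of the time.
# The monthly figures below are invented for illustration.
import numpy as np

predicted  = np.array([1.02, 0.95, 1.10, 0.88, 1.05, 0.99, 1.20, 0.91])  # model (% Cu)
reconciled = np.array([1.00, 0.97, 1.04, 0.90, 1.01, 1.05, 1.18, 0.95])  # mill (% Cu)

relative_error = np.abs(predicted - reconciled) / reconciled
within_tolerance = relative_error <= 0.05

pass_rate = within_tolerance.mean()
print(f"Within +/-5% in {pass_rate:.0%} of periods (target: 90%)")
```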

But for geological models represented by geometric shapes, the purposes may be manifold – from illustrating an exploration concept for planning a drill program, to creating the deposit-scale interpretation underpinning resource estimates, to defining a single domain volume. These clearly lie along a spectrum: from situations where the hard data input is low and the reliance on parametric choices and interpretation is high, through to abundant hard data with little scope for interpretation or parametric choice.

Data is a critical determinant of quality, but so are the modelling approach, the parameters and the interpretation. A more useful approach is perhaps to consider all possible sources of uncertainty and their contributions.

Sources of uncertainty affecting the ‘quality’ of implicit models:

Intrinsic variability

Variability of the phenomenon of interest at all scales, from sample support to block support to domain.
The magnitude depends on the phenomenon itself and on the scale of measurement and estimation. For some phenomena (e.g. precious metal grades) it may be high; for others (e.g. seam thickness) low.
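The effect of support on variability can be illustrated with a simple simulation: averaging simulated point-support grades into larger blocks reduces the variance. The sketch below ignores spatial correlation, so it overstates the reduction, but it shows the direction of the effect.

```python
# Illustration of the support effect: variance drops as sample-support grades
# are averaged into block-support values. Grades are simulated, not real.
import numpy as np

rng = np.random.default_rng(0)

# 10,000 'samples' from a skewed (lognormal) grade distribution, e.g. g/t Au
sample_grades = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

# Average non-overlapping groups of 16 samples to mimic block-support values
block_grades = sample_grades.reshape(-1, 16).mean(axis=1)

print(f"Sample-support variance: {sample_grades.var():.2f}")
print(f"Block-support variance:  {block_grades.var():.2f}")
```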

Physical data errors

Location
Errors in collar and downhole survey measurements, errors in markup, etc.
The magnitude of error depends on the capability of the instruments used; this should be appropriate for the requirements of the job.

Logging (subjective)
Inconsistency in logging, mis-identification.
Logging is a subjective process. In the future, it is likely that much logging will be replaced by quantitative measurement.

Sample and analytical errors
Primary sample recovery, sampling errors during sample preparation, analytical errors.
Must be actively managed and quantified.
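One common way to quantify sampling and analytical error is routine analysis of duplicate pairs. The sketch below computes absolute relative paired differences for a handful of invented original/duplicate assays against an assumed 10% acceptance limit; real acceptance limits vary by laboratory and commodity.

```python
# Sketch of quantifying sampling/analytical error from duplicate pairs using
# the absolute relative difference between original and duplicate assays.
# Pairs are invented for illustration; the 10% limit is an assumption.
import numpy as np

original  = np.array([1.05, 0.42, 3.10, 0.88, 2.40])  # % Cu (assumed)
duplicate = np.array([1.00, 0.47, 2.85, 0.90, 2.55])

pair_mean = (original + duplicate) / 2.0
relative_diff = np.abs(original - duplicate) / pair_mean

print(f"Mean relative difference: {relative_diff.mean():.1%}")
print(f"Pairs beyond 10% (assumed limit): {(relative_diff > 0.10).sum()}")
```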

Mistakes
Errors in identification, recording, transcribing, retrieving, attribution. For example, sample swapping, wrong collar.
Errors of this type commonly result in RADICAL errors in models. Frequently the only way to detect such errors is by cognitive processes. Such errors CAN be eliminated. Doing so should be a major focus of data quality management.

Data adequacy

Sample spacing, distribution and orientation
Orientation of drilling with respect to key structures, sample spacing relative to volume of interest (SMU), spacing relative to important geological features/grade distribution.
May not be known until after the fact. Often decided by comparison with analogue deposits. High value in obtaining close-spaced data at early stages.

Observation biases, knowledge gaps
Whether relevant data is recorded.
As above, relevance may only become clear after time.

Geometric modelling

Choice of model
Intrusion vs vein vs contact surface, choice of drift model, global versus structural trend.
There is no objective method for guiding these choices. The decision is usually a pragmatic assessment of which choices produce the best-looking results.

Choice of parameters
Compositing rules, anisotropy ratio, orientation of anisotropy, range of continuity, etc.
You would generally expect these choices to be at least partly influenced by the data. In practice, the same pragmatism as described above applies.
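For example, an anisotropy ratio can be thought of as a coordinate rescaling: stretching or shrinking each axis by its range of continuity before fitting an isotropic interpolant. The sketch below shows that idea with assumed ranges and the same illustrative data as above; it is not how Leapfrog exposes these parameters.

```python
# Sketch of how an anisotropy ratio can be expressed as a coordinate transform:
# scale each axis by its range so the isotropic interpolant 'sees' longer
# continuity in some directions. Ranges and data are assumed values.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Ranges of continuity along (strike, dip, across) directions, in metres (assumed)
ranges = np.array([100.0, 50.0, 10.0])

def to_isotropic(xyz, ranges):
    """Scale each axis by its range so one unit = one range in every direction."""
    return np.asarray(xyz, dtype=float) / ranges

hard_xyz = np.array([[0, 0, -10], [25, 5, -20], [50, -5, -15],
                     [75, 10, -30], [100, 0, -25]], dtype=float)
hard_grades = np.array([0.8, 1.2, 0.4, 1.5, 0.6])

interpolant = RBFInterpolator(to_isotropic(hard_xyz, ranges), hard_grades,
                              kernel="linear")
print(interpolant(to_isotropic([[60.0, 0.0, -18.0]], ranges)))
```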

Grade estimation

Choice of domain to estimate
A subjective decision guided by the patterns observed in data, the notion of statistical homogeneity, and scale (splitting versus lumping).

Choice of parameters
Compositing rules, nugget and range of the continuity model, anisotropy orientation and ratio, and choice of drift model.
These choices should be at least partly influenced by the data. In practice, the same pragmatism as described above applies.