
QUALITY CONTROL OF ASSAY DATA USING DUPLICATE DATA

By
A. J. Sinclair
Professor Emeritus
Geological Engineering
Dept. of Earth and Ocean Sciences
The University of British Columbia
6339 Stores Rd.,
Vancouver, B. C. V6T 1Z4
e-mail: ajsincon@shaw.ca

May 2005
Revised February 2009

SECTION 1
1.0 INTRODUCTION
1.1 Recent Perspectives
1.2 ISO 9000 and Other Quality Systems
1.3 Appraisal and Quality Control Plan for Resource Estimation
1.4 Salting
1.5 Example: Bre-X
1.6 General Precautions
1.7 Course Illustrations Using P-res Software

SECTION 2
2.0 STATISTICAL PARAMETERS COMMONLY USED IN ERROR ANALYSIS
2.1 Introduction
2.2 Measures of Central Tendency
2.2.1 Arithmetic Mean
2.2.2 Median
2.3 Measures of Dispersion (Spread) of Values
2.3.1 Introduction
2.3.2 Variance
2.3.3 Standard Deviation
2.3.4 Standard Error of the Mean
2.4 Coefficient of Variation
2.5 The Simple Linear Model
2.5.1 Introduction
2.5.2 Assumptions Inherent in a Linear Model Determined by Least Squares
2.5.3 A Practical Linear Model
2.5.4 Choice of an Estimation Method
2.5.5 Example: Silbak Premier Gold Deposit
2.6 Displaying Data: the Histogram
2.7 Displaying Data: Scatter (x-y) Diagrams
2.7.1 x-y Plots Using P-res

SECTION 3
3.0 STATISTICAL TESTS COMMONLY USED IN TREATING DUPLICATE AND REPLICATE ANALYSES
3.1 Introduction
3.2 F-test: Comparison of Variances
3.3 Student's t-test: comparison of means
3.4 Paired t-Tests
3.4.1 Example: Silbak Premier Blasthole Data
3.5 Significance of r, the correlation coefficient
3.6 Statistical Tests Involving the Linear Model
3.6.1 Significance Test
3.6.2 Example: Silbak Premier Gold Deposit

SECTION 4
4.0 PRACTICAL MEASURES OF SAMPLING AND ANALYTICAL ERRORS
4.1 The nature of errors
4.2 Relative Error
4.3 Mean Absolute Difference
4.4 Thompson and Howarth Error Analysis
4.4.1 Assumptions
4.4.2 The Method
4.5 The Simple Linear Model (Sinclair and Blackwell, 2002)

SECTION 5
5.0 SOURCES OF ERRORS: SAMPLING, SUBSAMPLING AND ANALYSIS
5.1 Introduction
5.2 Sampling
5.2.1 Chip Samples
5.2.2 Channel Samples
5.2.3 Drill Core Samples
5.2.4 Drill Cuttings Samples
5.3 Subsampling
5.4 Pierre Gy's Fundamental Sampling Error
5.5 Analysis
5.5.1 Metallic Assays

SECTION 6
6.0 MONITORING AND QUANTIFYING DATA QUALITY
6.1 Introduction
6.2 Dealing With Standards
6.2.1 Introduction
6.2.2 Blanks
6.2.3 Monitoring Subsampling Contamination
6.2.4 Monitoring the Analytical Environment
6.3 Laboratory QA/QC Procedures
6.4 Duplicate QC Data During Exploration/Evaluation
6.4.1 Sampling Plan for Quality Control
6.4.2 Data Editing
6.4.3 Principal lab
6.4.4 Check Lab
6.5 Interpretation of Results
6.6 AN EXAMPLE OF A COHERENT DATA SET
6.6.1 Introduction
6.6.2 Estimating Analytical Error of the Due Diligence Lab
6.6.3 Estimating Analytical Error of the Original Lab
6.6.4 Sampling Error
6.7 SOME COMMONLY ENCOUNTERED SITUATIONS
6.7.1 HALF-CORE VERSUS QUARTER CORE
6.7.2 DATA SPANNING A VERY LARGE RANGE (perhaps several orders of magnitude)
6.8 ANALYSES BY ALTERNATIVE ANALYTICAL METHODS
6.9 LACK OF CONSISTENCY IN DUPLICATE DATA

SECTION 1
1.0 INTRODUCTION
1.1 Recent Perspectives
In recent years there has been a strong international move toward knowing and improving the quality
of information used in the mining industry for resource/reserve estimation. In Canada this trend has been
accentuated because of recent, highly publicized scams that involved contamination of samples so as to produce
assay results far above the true metal contents of the samples in question. One important aim of quality control
procedures is to minimize the likelihood of such scams so that the public is not misled as to the economic
potential of a mineral deposit. In addition, quality control procedures serve the technical purposes of identifying
sources of and quantifying both random errors and unintentional bias in sampling, subsampling and analytical
routines and thus provide the basis for improved procedures of data collection that translate into improved
resource/reserve estimates.
One of the important reactions in Canada to recent mining scams has been the implementation of what
is known as National Instrument 43-101 (NI 43-101) in which a wide range of requirements relating to the
publication of assays and resource/reserve estimates is laid out. These requirements identify a Qualified
Person (QP) who is responsible for all technical matters related to obtaining and publicizing both assay data and
resource/reserve figures. This course incorporates a variety of procedures designed to fulfill the requirements of
NI 43-101 insofar as standard, blank and duplicate samples can be used to define and monitor quality of assay
values that are the basis of deposit evaluation.

1.2 ISO 9000 and Other Quality Systems


The International Organization for Standardization (ISO), a worldwide federation of more than 110 national
standards bodies (including the Canadian Standards Association, CSA) based in Geneva, Switzerland, has, over
the past few years, developed a variety of widely distributed publications dealing with quality control systems
for a wide range of industrial settings (e.g., CSA references). The various proposals in these publications are
general in nature and must be adapted through the development of detailed quality control schemes for specific
industries. For example, in resource/reserve estimation these schemes might include the following matters:
1. Targeted quality objectives:
   - Resource/reserve estimates within 10% of reality.
   - Must explain how the comparison will be made because a simple comparison of production with
     estimated reserves may not be a fair test.
2. Attributing responsibilities and authority:
   - Chief assayer
   - Chief geologist
   - Chief engineer
   - Competent person
3. Specific and documented procedures, methods and work instructions for individuals in 2, above.
4. Testing, controls, check and audit programs appropriate to the various work stages.
5. A method to achieve any necessary changes and modifications to the quality plan as the projects advance.
6. Any other measures required to reach the quality objectives.


The foregoing matters are discussed in greater detail in the CSA references and by Vallee (1998) and Lemieux et
al. (2003), among many others.

1.3 Appraisal and Quality Control Plan for Resource Estimation


The application of the ISO standards to resource/reserve estimation procedures necessarily involves all
steps of the published procedures. Too often quality control is thought of only in terms of quantitative
measurements, i.e., assay data. A broader perspective is essential and must include the categorical and
qualitative data that are inherent in geological studies. The general aspects of resource/reserve estimation for
which quality must be considered are:
1. Acquisition of geological information
2. Geological models of the mineral deposit
3. Sampling procedures (including duplicate samples)
4. Assaying procedures (including duplicate analyses)
5. Validation of geological and sampling data
6. Exploration data evaluation
7. Resource/reserve estimation procedures (including at least preliminary geostatistical approaches).
8. Criteria for resource/reserve classification.

The use of duplicate data, as a means of knowing and monitoring quality of assay data for samples, is
seen to be a part of the overall process of quality control for the entire estimation undertaking. In this overall
structure, an appropriate person or persons must be designated to undertake the tasks of monitoring assay data
quality and maintaining an acceptable level of quality. Periodic audits by both internal and external experts are
an important aspect of maintaining an acceptably high quality of assay data. As Rogers (1998) states, quality
assurance policies and quality control procedures involve continuous monitoring of work processes and
information flow pertaining to mineral deposit exploration/appraisal projects; the information base is a
constantly changing one, and regular updating of mineral deposit models and methods of data treatment must
be subjected to both internal and external (independent) audits intermittently.
The details of sampling methodology are not a component of this course. Nevertheless, assay data are
obtained for samples and anyone involved in quality control of assay values must clearly understand all steps in
the sampling, subsampling and analytical protocol. Hence, a general discussion of various types of samples is
warranted, partly for the sake of uniformity of terminology and more fundamentally because of the importance
attributed to sample assays. Too often a sampling procedure used in one deposit is transferred without testing
its appropriateness to another deposit. Sinclair and Blackwell (2002) recommend that sampling experiments be
conducted to test the quality of various sampling approaches, with the ultimate aim being to adopt a sampling
methodology that is of adequate quality and is appropriate to a particular deposit.

1.4 Salting


Salting, the surreptitious introduction of material into samples (McKinstry, 1948), has been with us
since the time of Agricola (Hoover and Hoover, 1950). Salting of possible ore material is an unfortunate
occurrence in the evaluation of some deposits and its presence or absence must be established during the process of
data verification. Such invalid data cannot be recognized by the use of blanks, standards or reference samples
but must be checked by a program including examination of sample rejects as well as resampling. Salting by
the addition of particulate gold to samples of gold deposits at some stage in the subsampling procedure is
unlikely where very low grades are reported because such salting normally leads to an abundance of moderate
to high values. Consider the following hypothetical example: a single particle of gold in a subsample to be
assayed generally will lead to an assay well in excess of the 0 to 4 g/t range. One grain of gold in a subsample
consisting of 1 million grains (gold grain assumed to be the average size of non-gold particles) is equivalent to
an average grade (for the 1 million grain sample) of about 7 g Au/t.
Details of the calculation: 1 grain of gold is 19/2.8 times as heavy as an equivalent-sized grain of most gangue minerals (densities of about 19 and
2.8 g/cm3, respectively). Consequently, the 1 million grain sample contains about 19 grams of Au per 2,800,016.2 grams of sample (i.e., about 7
grams Au per 1 million grams of sample, or 7 grams per tonne).
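The arithmetic of this hypothetical example is easy to verify; the following minimal Python sketch (assuming, as above, equal-sized grains with densities of about 19 g/cm3 for gold and 2.8 g/cm3 for gangue) computes the grade implied by a single gold grain in a subsample of one million grains.

    # Grade implied by one gold grain among n equal-sized grains (densities assumed above).
    RHO_AU = 19.0       # approximate density of gold, g/cm^3
    RHO_GANGUE = 2.8    # approximate density of typical gangue minerals, g/cm^3

    def implied_grade_g_per_t(n_grains=1_000_000):
        """Grade (g Au/t) implied by a single gold grain in a subsample of n_grains."""
        mass_au = RHO_AU                                    # relative mass of the one gold grain
        mass_total = RHO_AU + (n_grains - 1) * RHO_GANGUE   # relative mass of the whole subsample
        return 1_000_000 * mass_au / mass_total             # grams per tonne (ppm by mass)

    print(round(implied_grade_g_per_t(), 1))   # about 6.8, i.e. roughly 7 g Au/t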

The surest method to verify that salting has not occurred where half core samples are involved is to
undertake duplicate sampling of the other half of the core and compare the assays for the second set of samples
with the previously reported data (e.g., McKinstry, 1948). Of course, this approach assumes that the second set
of half cores has neither altered due to natural processes nor has itself been contaminated in any way. Rejects
and pulps would not be re-analyzed as an analytical test for salting in the example cited because they might
already be contaminated. Rather, rejects and pulps (or heavy mineral concentrates from them) should be
examined by microscope to check for particulate, contaminant gold, i.e., shavings of refined gold or grains of
naturally occurring alluvial or colluvial gold.
In the case of gold assays, optical investigation of pulps and rejects with a microscope can be a partial
approach to recognizing the presence or absence of salting in cases where either placer gold grains or refined
gold shavings have been purposefully added to samples. Both types of contaminant gold are relatively straightforward for a mineralogist to identify. The simple procedure of testing a rough heavy mineral concentrate,
taken from samples intermittently for microscope examination, is a small price to pay for the benefit that arises
in establishing the absence of particulate, contaminant gold at an early stage in a scam.
Precious metal deposits are particularly susceptible to salting, perhaps in part because the romantic
lore surrounding highly publicized gold rushes of the 19th century (California, Klondike, Witwatersrand, etc.)
provides a picture that scam artists of today can draw on. Today's scam artists are particularly
innovative in their approaches to salting. Consequently, as Rogers (1998) emphasizes, security of samples
from the time they are taken until assays are received (i.e., an open and clearly defined chain of custody of
samples and assays, reinforced through "target hardening") is the safest way of being confident that salting
has not occurred. Danger signs or red flags that alert one to the possibility of subterfuge in mineral deposit
evaluation, after Rogers (1998), are given in Table 1-1. Of course, it is impossible to foresee all of the
creative approaches that scam artists might take; hence, vigilance with regard to the samples and the sampling
and assaying procedures is essential.

TABLE 1-1: SOME DANGER SIGNS SUGGESTIVE OF SUBTERFUGE IN MINERAL EXPLORATION PROGRAMS (after Rogers, 1998)

Personnel
(i) overstated or misrepresented qualifications of key personnel
(ii) previous history of problems with regulatory authorities of key personnel
(iii) exclusive control, by small work groups, of sample flow from drill to assay lab
(iv) significant changes in lifestyle of key personnel
(v) presence of artisanal miners or unauthorized personnel on site

Scientific/Technical
(i) inconsistent and unconvincing geological/deposit model
(ii) poorly maintained plans and sections, particularly in relation to recent results and ongoing work
(iii) absence of effective management supervision or outside auditing

Drilling Program
(i) consistently poor drill core recovery in mineralized intervals
(ii) skeletonized core or missing intervals in retained splits
(iii) inability to replicate lithology between adjacent drill holes
(iv) failure to maintain continuity of raw core and splits
(v) core logging inconsistencies, particularly concerning alteration/mineralization

Sampling and Assaying
(i) unaccounted delays in sample shipment
(ii) identical sample numbers used more than once
(iii) presence of chemicals, equipment and/or materials not related to work or site
(iv) drill results inconsistent with trench or channel samples
(v) inability of independent labs to replicate original assay results within acceptable limits

Resource Estimation
(i) resource estimates inconsistent with other deposits of same type
(ii) resource estimates inconsistent with other deposits in same area
(iii) resource estimates not reproducible by independent audit

Petrographic, Mineralogical and Mineral Processing Audit
(i) inconsistent mineral assemblages for deposit type
(ii) mineral processing/metallurgical anomalies; unusual metal recovery characteristics

1.5 Example: Bre-X
The recent Bre-X scandal is reputedly the most significant mining scam of modern times, with losses
said to be of the order of Can$6 billion. In brief, the scam seems to have been perpetrated by a small group of
employees who, at an in-transit storage location, reopened bags containing samples and contaminated them
with carefully weighed amounts of placer ("flour") gold. Difficulties arose in 1996 in reproducing assays, leading
to an auditing firm being employed to verify the assays. The resulting report (Farquharson et al., 1996) noted
a number of observations "considered to be indicative of potential problems with the Bre-X sample data", as
follows:
1. We were surprised when hearing that all of the core from Busang was being assayed with the
exception of the 10-cm half-skeletons that were retained from each one metre of core. The
reason offered for this decision (which does not conform with normal industry practice) was
that the coarse nature of the mineralization required a large sample. However, the HQ core size
is larger than is normally used in many exploration programs and results in a sample weight of
14 kg for every two metres; one-half of this sample, as we have used in our audit, would still
provide a sample of 7 kg, more than enough to be representative of the two metres. The basic
reason for the industry practice of retaining half the core is to be able to review the geology at
any time and to carry out check assays should there be any questions as to the accuracy of the
original assays.
2. The decision by Bre-X to designate some core as "mineralized" and other core as "in-fill" is
very different from what one would expect as a standard practice particularly given that the two
categories of material would follow different sampling routes. The in-fill material was treated in
the sample preparation facility at Busang to produce a pulverized pulp that would be ready for
assaying and invariably resulted in low gold values. The mineralized core was bagged,
delivered to Samarinda and subsequently taken to Balikpapan, and usually some gold values
resulted. Normal mining industry practice would be to have all the core treated exactly the same
way through the same facilities. With the very large backlog of core to be sampled at Busang
we were surprised to note that during our period at the property the sample preparation facility,
which is very well equipped, was idle and the employees assigned to that area advised that they
had no material to process.
3. In reviewing the Kilborn intermediate feasibility study of November 1996 and in particular the
metallurgical work carried out under the supervision of Normet Pty. Ltd. in Australia, we were
struck by the statement that more than 90 percent of the gold in the Busang metallurgical
samples could be recovered in a gravity concentrate that represented about 6 percent of the
weight of the feed. Although one could expect to see such exceptional recoveries in a gravity
circuit for material coming from an alluvial deposit, we have never before seen such a response
for material coming from a primary deposit. In a mineralogical study carried out for Normet by
Roger Townend and Associates of Perth, they identified the gold particles and made the
following comment: "gold particles are liberated and mostly 100 to 400 microns. Some particles
show distinct gold-rich rims with argentian core, other particles are of uniform colour of
varying silver content. Gold particle shapes were mostly rounded with beaded outlines." With
the very coarse liberated gold of up to 400 microns reported in this study, one would have
expected to see visible gold somewhere in the many thousands of metres of drill core from
Busang. However, there is no mention of visible gold in any of the documentation that we have
seen with the Kilborn feasibility study or resource estimate, or in the drill logs prepared by BreX geologists, other than in the mineralogical studies done on the concentrate samples.
4. In a parallel metallurgical investigation carried out at Hazen Research Inc. in Golden,
Colorado, and as reported on October 24, 1996 in a Hazen report included in the Kilborn
intermediate feasibility study, similar remarkable metallurgical and mineralogical characteristics
were discussed. In fact, if there has not been a typographical mistake, Hazen reported
recovering 91 percent of the gold in a gravity concentrate that was less than one percent of the
material delivered to the gravity circuit. Similarly, a mineralogical study was done by Hazen,
and they reported that the gold particles in the Busang composites were liberated as relatively
coarse nuggets and minor flakes with an average size range of 60 to 180 microns. The gold
grains are typically very compact and often nearly spherical in shape. Photographs were
included as they were with the report prepared by Roger Townend and Associates, showing
coarse liberated gold lying amongst much smaller size particles of pyrite and other gangue
material. Hazen also pointed out in their test work that there was no observable trend with
respect to gold recovery and residual tailing grade as a function of fineness of grind. In other
words, it mattered little how fine the ore was ground in the grinding circuit, the gold was
already liberated and minimal grinding was required, which is most unusual for ore originating
from a primary deposit. The deposit was therefore a metallurgist's dream, but no reconciliation
was made with the fact that no visible gold had been reported in core samples. In their
mineralogical study, Hazen also commented on and provided a photomicrograph of the colour
of the Busang gold indicating that it was electrum but with a deep yellow rim suggesting
surface dissolution of some of the silver, a fact that has since been independently confirmed by
both Freeport and ourselves. Both Hazen and Normet have commented on the great difficulty in
repeating grades for assays of the same samples that were used in the metallurgical test work, a
feature that many others have commented on, and this is a reflection of the coarse, free, and
rounded nature of the gold grains in the Bre-X samples.
5. A very comprehensive petrographic report was prepared for Bre-X in September 1996 by
PetraScience Consultants Inc. of Vancouver, in which they reviewed the characteristics of
alteration and mineralization of 103 samples from Busang. All the samples were described in
great detail petrographically, from polished thin sections that allowed description of sulfides,
oxides, and silicate minerals. Amongst the conclusions from this report was the following
statement: "Gold is assumed to occur dominantly as free grains, as no unequivocal gold was
observed in this study, either as grains or in other sulfides." It is again remarkable that, in a
deposit where so much coarse gold has been observed in samples submitted for assaying,
none was observed in the 103 samples that were selected to represent the full range of rock
types and associated alteration in the Busang deposit.

6. With a deposit the size and grade reported for the Southeast Zone at Busang, and with its close
proximity to surface (based on the assays reported for many holes), one would have expected a
strong surface expression either through geochemical sampling of soils and sediments or
through sampling of outcrop. There is a geochemical anomaly over the Central Zone, which is
consistent with the mineralization reported to be there in the drilling program of 1989 and later.
We were unable to find any evidence of a geochemical survey on the Southeast Zone, but were
given a surface map showing where outcrops had been sampled, with very low gold values
resulting; this is not what would normally be expected for a deposit of several hundred million
tonnes grading 2 to 3 g/t, with almost no stripping required to commence mining.

A number of additional problems with the Bre-X situation have been listed by others (e.g., Jones et al., 1998;
Lawrence, 1998):
7. Three separate data bases were provided to Freeport-McMoRan for their due diligence of early
1997. There were discrepancies of thousands of samples between any two of these data bases.
One particular problem was the existence of 3864 sample numbers for which duplicate values
existed: one set showed ore grades whereas the other contained little or no gold.
8. An extremely unusual, rapid rate of growth of reported resources during exploration.
9. The magnitude of the reported resource is so far above normal as to demand thorough
verification.
10. Normal industry practice is to have all core samples treated by the same facility with the same
protocol, rather than the two very different subsampling/analytical paths used by Bre-X
personnel for their "mineralized" and "fill-in" (unmineralized) categories of samples, which were
based on visual classification of sample material.
11. Publicity regarding the deposit routinely included the 50 percent of resources that were in the
Inferred category and to which no emphasis should have been attached.
"Target hardening" is the strategic strengthening of high-risk areas in exploration/evaluation to reduce
the risk of tampering. To effect target hardening it is imperative to clearly understand the details of all aspects
of information gathering and transfer, including collection of information, shipment of samples, sample
analysis, and reporting of assay results. This general sequence is known as the chain of custody of
samples/information and the chain should be clearly documented. Safety lies to a large extent in adherence to
a selection of so-called best practices that are widely used and recommended throughout the mining industry.
An indication of some of the more important best practices is given in Table 1-2.


TABLE 1.2: A SELECTION OF BEST PRACTICES IN THE MINERAL INDUSTRY (after Rogers,
1998)


1.6 General Precautions


Cornish (1966) suggests the following general rules to help prevent the possibility of salting:
1. Sample only fresh exposures that you, yourself, have chosen.
2. Try to sample over as much of the deposit as is feasible.
3. Keep all personnel who are not part of your sampling team away from the sample locations during the
   taking and subsequent handling of the samples.
4. Never allow the samples out of sight unless they are sealed and in sound containers.
5. Have the samples assayed by a reputable assayer, preferably one not residing in the area of the deposit
   under investigation. In addition, acquaint yourself with the assay procedure involved; know its limitations.
6. Take duplicate samples at 10-15% of the sampling sites.
7. Always be suspicious of unusually high assays. There are two checks that can be made to protect yourself
   in this situation: submit dummy samples to the assayer (samples of known quality); send duplicate
   samples to different but reputable assayers.

1.7 Course Illustrations Using P-res Software


This course involves the use of many examples, all based on real analytical data that demonstrate
situations encountered in the real world. Most of what is demonstrated can be done on a spreadsheet such as
Excel, Quattro Pro and others. Graphic output, particularly useful for quantitative evaluation of much quality
control data, is produced using an MS-DOS-based system called P-res (Bentzen and Sinclair, 1993), which is
available free of charge to users of this course. P-res and its user's manual are distributed in the compressed,
self-extracting file P-rez.exe. Simply download the file into an appropriate directory, type P-rez followed by
<enter>, and the downloaded file will expand to an operational system. The user's manual is part of the
download and can be printed if required. Examples of P-res output are described in
later sections of this course.


SECTION 2
2.0 STATISTICAL PARAMETERS COMMONLY USED IN ERROR ANALYSIS
2.1 Introduction
Quality control procedures, pertaining to assay data for use in mineral deposit appraisal, are aimed, in
part, at understanding sources and magnitudes of errors in the assay values reported from an analytical
laboratory. This information is used, where necessary, to improve aspects of the sampling-subsampling-analytical system in order to maintain errors at an acceptably low level. It is important to realize that errors in
assay data exist, however good the individuals are who provide the data. Furthermore, it is important to
appreciate that unless errors are known quantitatively it is not possible to be confident about whether or not
they are at acceptable levels or how they can best be reduced to acceptable levels. It is impossible to deal
quantitatively with errors in assay values without an understanding of some basic statistical concepts and
procedures, including measures of central tendency and dispersion, and various tests such as t- and F-tests.

2.2 Measures of Central Tendency


Common measures of the tendency of a set of values to congregate are the arithmetic average, the
median and the mode. Of these, the first two find widespread use in dealing with duplicate data used for
purposes of quality control.

Figure 2.1: Histogram of a hypothetical data set of integer values (abscissa) illustrating that mode,
median and mean can differ for a single data set that is skewed. Numbers of items in each class are
listed in each bar of the histogram.


Figure 2.2: Three examples of drill-core assays presented as histograms. (A) negatively skewed,
(B) symmetric, (C) positively skewed. A smooth normal curve has been fitted to the symmetric
histogram to emphasize the similarity of histograms and data distributions.

2.2.1 Arithmetic Mean


A mean value is a measure of central tendency of data. The arithmetic mean, m, of a data set consisting
of n items (x1, x2, x3, ..., xn) is a simple and widely used parameter in error analysis; it is simply the sum of
all data items (xi) divided by the number of data items (n), as follows:

m = Σxi / n     (1)

The numeric examples in Table 2.1 can be easily verified manually.


Some uses of averages are:
1. to compare round robin analyses of standards by different labs
2. to evaluate the average difference in sets of duplicate analyses either by the same lab or by different labs
3. to characterize the most likely metal contents of standard samples
4. to characterize the grade of a sample that has been analyzed two or more times.


Figure 2-3: A sequential plot of 58 analyses of a Pt standard. The lower horizontal line is the mean
value for reference. The upper horizontal line is one standard deviation above the mean. Note that
the mean value is biased high relative to the true mean because a single, obvious outlier has been
included in the estimate.

TABLE 2.1: CONSECUTIVE Pt AND Pd VALUES OBTAINED DURING NUMEROUS BATCH PGM
ANALYSES OF ROUTINE SAMPLES (Values in ppb)
In-House Std
Seq Pt Pd
1 30 140
2 41 103
3 31 124
4 116 100
5 28 102
6 33 103
7 60 103
8 29 109
9 35 86
10 33 98
11 33 106
12 38 93
13 30 91
14 26 86
15 37 100
16 57 84
17 47 134
18 32 122
19 35 113
20 27 145
21 31 117
22 32 99
23 58 104

24 33 86
25 33 100
26 31 115
27 28 85
28 38 96
29 30 89
30 42 152
31 42 99
32 39 98
33 23 101
34 32 120
35 30 92
36 28 96
37 30 109
38 33 93
39 32 93
40 36 89
41 22 97
42 46 81
43 27 99
44 38 95
45 30 99
46 31 103
47 49 71
48 19 76
49 23 93
50 27 88
51 22 79
52 31 75
53 38 79
54 47 91
55 28 98
56 29 94
57 53 97
58 36 125

The data in Table 2.1 are consecutive Pt and Pd analyses of pulp samples of an in-house standard (not
a CRM, certified reference material); each 2 to 6 consecutive analyses were included in separate analytical
batches analyzed by a reputable lab. The standard was constructed by mixing a number of subsamples of mill
feed and blending the composite material. Normally, supporting data would include many analyses by other
reputable labs and an analysis of the results to provide a best estimate of the true metal contents as well as
their 95% confidence limits. A plot of the Pt data is shown in Figure 2.3 where the presence of a single outlier
is evident. The statistics for 58 Pt values are: m = 35.8 ppb, s = 13.9 ppb. If the outlier is removed these values
change to: m = 34.4 ppb and s = 8.9 ppb. Note that with exclusion of the outlier the mean is lowered by
100(1.4/35.8) = 3.9% whereas the standard deviation is reduced by 100(5.0/13.9) = 36%. The general precision
is then (200 x 8.9/34.4) = 51.7%, hardly the quality of reproducibility that one would hope for in a standard.
These data illustrate some of the practical problems encountered with in-house standards:
1. The presence of an outlier value (xi = 116 ppb Pt) in the Pt data; the outlier must be omitted to arrive at a
   best value for the Pt content of the standard (see Figure 2.3).
2. A relatively high level of variability among batch mean values; examine the variability of averages of
   groups of 3 to 6 values.
3. A relatively high level of variability of repeat values within individual batches; examine the internal
   variability of local groups of 3 to 6 values.

Explanations for the high variability of analytical results are difficult without further tests but possibilities
include:
1. Inappropriate analytical procedure.
2. Poor procedures for homogenizing the pulped material of the standard. Special attention must be directed
to homogenizing the pulp material that comprises the standard.
3. Poor choice of material for a standard, perhaps mineralogically complex and difficult to homogenize
because of the inability to redistribute trace amounts of some of the valuable minerals uniformly
throughout the standard. This is a very likely cause of the poor precision in these data.
In addition, without supporting data from other independent labs, these data only provide a measure of
precision but not accuracy!
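The effect of this single outlier on the statistics quoted above is easily checked; the following minimal Python sketch (Pt values transcribed from Table 2.1) recomputes the mean, standard deviation and precision (200s/m) with and without the 116 ppb value.

    import statistics

    # Pt values (ppb) from Table 2.1, in sequence order
    pt = [30, 41, 31, 116, 28, 33, 60, 29, 35, 33, 33, 38, 30, 26, 37, 57, 47, 32,
          35, 27, 31, 32, 58, 33, 33, 31, 28, 38, 30, 42, 42, 39, 23, 32, 30, 28,
          30, 33, 32, 36, 22, 46, 27, 38, 30, 31, 49, 19, 23, 27, 22, 31, 38, 47,
          28, 29, 53, 36]

    def summarize(values):
        m = statistics.mean(values)       # arithmetic mean, equation (1)
        s = statistics.stdev(values)      # sample standard deviation (n - 1 divisor)
        return round(m, 1), round(s, 1), round(200 * s / m, 1)   # mean, std dev, precision %

    print(summarize(pt))                                  # ~35.8 ppb, ~13.9 ppb, ~78% (all 58 values)
    print(summarize([v for v in pt if v != 116]))         # ~34.4 ppb, ~8.9 ppb, ~52% (outlier excluded)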
The data of Table 2.2 are a summary of average metal contents based on 24 repeat analyses of a standard
used to monitor internal lab reproducibility. The data by themselves do not provide a measure of accuracy
because the standard has not been treated as a CRM (certified reference material) so the metal contents have not
been established adequately. However, the data are typical and represent an example of the level of
reproducibility (precision) that can be expected for routine multielement analyses (low levels of concentration)
by reputable commercial labs. Note the very large differences in precision for various metals in the same
standard, a situation that is in part attributable to different abundance levels and in part to variable difficulty in
analyzing different metals with a single analytical procedure.
TABLE 2.2: AVERAGE GRADES OF 24 REPEAT ANALYSES* OF A SINGLE IN-HOUSE
STANDARD BY A REPUTABLE COMMERCIAL LABORATORY
Element (units)    Average    Std. Dev.    Rel. Err. (s/m)    Precision 200(s/m)
Pd_ppb             3882       150.2        0.039              7.74%
Pt_ppb             221.5      17.03        0.076              15.4%
Cu_ppm             13390      601.5        0.045              9.0%
Au_ppb             87.3       15.7         0.180              35.9%
Ni_ppm             408        39.2         0.096              19.2%
Co_ppm             57.1       2.29         0.04               8.0%
* Standards were inserted with various subsets of samples by the client and were not recognizable
as standards by the lab.


The value m in equation 1 is also spoken of as the expected value of the data. Commonly, estimation of the
average using equation 1 provides an acceptable estimate of m although difficulties can arise in certain cases.
For example, the presence of an outlier in the data can bias the estimation of m in some situations;
consequently, data should be examined for outliers and they should be removed if they impose bias on the
estimate. Consider the case of repeated analyses of a standard by several labs for which one lab has a single
value that is a factor-of-three times the average of all labs. Inclusion of that single value in estimating the
average metal content of the standard will clearly bias the value to be applied to the standard. The data of Table
2.1 show the large difference in estimated average value with and without inclusion of the outlier (116 ppb Pt).
The average, excluding the outlier compares well with averages obtained from other laboratories that formed
part of a round robin analytical program to establish an accepted value for the standard. The average that
includes the outlier is clearly out of line relative to data from all the other laboratories.
A second concern in evaluating the average value using equation 1 arises where there are very few
data. The arithmetic average based on only a few data is highly susceptible to a single high (not necessarily an
outlier) value. In such cases, where very few data are available, the median value is a better measure of central
tendency.

Figure 2.4: Histogram of differences of paired analyses of gold (Au1 − Au2) for pulps and rejects by
two different labs in 1983. The mean difference (vertical line through central class interval) is off-centre mainly because of a single value on the far right that destroys the symmetry of the histogram.
An important use of histograms is illustrated in Figure 2.4. In this example, duplicate analyses exist, an original
by the mine lab and a reject analysis by an independent lab. Generally, the purpose of analyses by an
independent lab is to monitor bias and duplicate pulp analyses are appropriate. The use of reject analyses for
this purpose introduces unnecessary errors (hence, wide spread of values about the mean) and clouds the
quantitative interpretation of the data.


2.2.2 Median
To find the median of a data set, arrange the data in either increasing or decreasing order, locate either
the mid value (odd number of data) or the two mid values (even number of data) and accept the corresponding
value or average of two, as the median. Median values rather than averages are an important part of the
Thompson-Howarth error estimation procedure to be described in a later section. For strongly skewed
distributions that are sampled by relatively few values, the median provides a more acceptable estimate of the
mean than does the arithmetic average.

2.3 Measures of Dispersion (Spread) of Values


2.3.1 Introduction
Dispersion of values in a data set can be measured by the range (max. value minus min. value) and the
variance (or standard deviation). For purposes of quality control, the range has little practical application
because it is susceptible to great variation by the inclusion/exclusion of a single value, particularly if the data
set contains relatively few values. In contrast, the variance is a very fundamental value in statistics and hence of
great import in a quantitative approach to quality control.

2.3.2 Variance
The variance of a population is the mean squared difference from the average and is estimated from
a data set as follows:
s2 = Σ(xi − m)2 / (n − 1)     (2)

where s2 is the variance, xi represents the n data items for i = 1 to n, and m is the arithmetic average of the data.
From the definition it is evident that the variance is a measure of average spread about the average value, albeit
in terms of squared units. The variance is a very fundamental measure of spread (dispersion) of data and is
widely used to characterize average error in a data set or a subset of the data. For example, equation 2 can be
applied to the data of Table 2.1 (excluding the outlier) and the spread of values about the average, expressed as
a variance, is calculated to be about 79 ppb2 Pt (the square of the standard deviation of 8.9 ppb quoted earlier).
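Equation (2) is straightforward to apply directly; the short Python sketch below does so for the first few non-outlier Pt values of Table 2.1 (any list of numbers can be substituted).

    def sample_variance(values):
        """Equation (2): s2 = sum((xi - m)^2) / (n - 1), with m the arithmetic mean."""
        n = len(values)
        m = sum(values) / n                                # equation (1)
        return sum((x - m) ** 2 for x in values) / (n - 1)

    print(round(sample_variance([30, 41, 31, 28, 33]), 1))   # 25.3 ppb^2 for this small subset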
The variance has the particularly useful property that it is the sum of the variances that arise from
different sources. As an example, consider the total error that arises in comparing the analyses of first half-core samples with the analyses of the other (second) half cores (i.e., the practical situation in which half cores
are the original sample and the remaining half cores are the check field samples). In this case the total error as a
variance, st2, is given by (see Burn, 1981; Kratochvil and Taylor, 1981)
st2 = sa2 + sss2 + ss2     (3)

where sa2 is the analytical error, sss2 is the subsampling error and ss2 is the sampling (geologic) variability. With
appropriate duplicate data, each of these sources of variability can be estimated and, if necessary, consideration
can then be given as to how best to improve the quality of the data.
Because it is only as squared values that errors are additive, variances are a sensible way of
considering the relative magnitudes of analytical, subsampling and sampling variabilities. Several examples are
shown in Table 2.3.


TABLE 2.3: PERCENTAGE DISTRIBUTION OF ERRORS FOR PRECIOUS METAL DEPOSITS

Source                  Relative Errors (%)              Comment
                        sa2      sss2     ss2
Toronto Stock Exch.     5        15       80              Average approximation attributed to gold deposits
Gold deposit                     37       55              Precambrian porphyry gold
Platinum deposit        12.8     0*       87.2            Bushveld platinum
Platinum deposit        44.7     0*       55.3            Bushveld palladium
High Nugget gold        72.0     20.7     7.3             Smee and Stanley, 2005
Epithermal gold         96.7     3.1      0.2             Smee and Stanley, 2005

* Subsampling error is zero because entire sample was pulped.
The figures of Table 2.3 are source error percentages of the total error, all errors expressed as variances. It is
important to appreciate the significance of the squared aspect of errors expressed as variances. Consider a gold ore
averaging 2.5 g/t with an average total error as a variance of 4.4 (g/t)2. Assume that the disposition of errors is
as in the Toronto Stock Exchange example of Table 2.3. Thus, the absolute value of analytical error variance is
0.05 x 4.4 = 0.22 (g/t)2, the absolute value of subsampling error variance is 0.15 x 4.4 = 0.66 (g/t)2, and the
absolute value of sampling error variance is 0.8 x 4.4 = 3.52 (g/t)2. It is more comprehensible to consider these
absolute errors as the square root of the variances so that the errors are in the same units as the assay data.
These square roots are respectively, analytical error = 0.47 g/t, subsampling error = 0.81 g/t and sampling error
= 1.88 g/t. Clearly, no matter how much time, money and effort is directed toward reducing the analytical error,
the overall error will remain high.
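The arithmetic of such an error budget is easily scripted; the following minimal Python sketch uses the Toronto Stock Exchange proportions of Table 2.3 and the hypothetical 2.5 g/t ore with a total error variance of 4.4 (g/t)2 discussed above.

    import math

    total_variance = 4.4                  # total error variance, (g/t)^2, for the 2.5 g/t example
    fractions = {"analytical": 0.05,      # proportions of total variance (Table 2.3, TSE row)
                 "subsampling": 0.15,
                 "sampling": 0.80}

    for source, f in fractions.items():
        variance = f * total_variance              # component variance, (g/t)^2
        error = math.sqrt(variance)                # back to the same units as the assays, g/t
        print(f"{source:12s} {variance:.2f} (g/t)^2  ->  {error:.2f} g/t")
    # analytical ~0.47 g/t, subsampling ~0.81 g/t, sampling ~1.88 g/t:
    # the total is dominated by sampling, so improving the assay alone gains little.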
In evaluating the significance of the information in Table 2.3 bear in mind that sa2 , although defined
as analytical error, involves the random error inherent in subsampling a pulp, generally one of the easiest tasks
to perform in the sampling protocol. Hence, in many cases it will be difficult to effect large improvements in
quality of data simply through improvements to sampling and analyzing of the pulps. In the example cited in
the preceding paragraph, a 50% decrease in the analytical error has a very small impact on the total error
because of the much larger errors implicit in subsampling and sampling. It is surprising how much effort
has been directed in the past to improving the quality of pulp analyses, considering that about 70 to
90 percent of the variability in data quality generally arises from other sources.
Large subsampling errors can arise easily because of non-optimal sample reduction procedures. The
use of Gy's sampling equation (Appendix ) and appropriate care taken to homogenize samples prior to each
mass reduction stage can result in a subsampling protocol that leads to a level of subsampling error as low as is
practicable. There are practical situations where the original sample is small enough that the entire sample can
be ground to a pulp, effectively negating subsampling as a source of error. Of course, this procedure results in
larger amounts of pulp than would otherwise be obtained, resulting in a larger analytical error than for a smaller
amount of pulp. This is because the error in sampling the pulp for analysis is traditionally included in what is
reported as the analytical error.
The sampling error is commonly the major source of variability in duplicate sampling, as for example,
in the comparison of assays for pairs of half cores or the comparison of assays of duplicate samples taken from
piles of blasthole cuttings. In contrast to analytical and subsampling variability which are errors, the sampling
variability is largely a real geologic variability, that is, a real difference in metal abundances over very short
distances in the original rock, a major contributor to the so-called nugget effect of the geostatistician. The most
effective way of reducing the impact of geologic variability is to increase sample mass, perhaps by increasing
the length of drill core that comprises a sample and/or increasing the diameter of the drill core that is to be
sampled. Some variability between paired half core analyses can arise in manual core splitting because the two
half cores can differ substantially in mass. This source of inconsistency can be overcome in practice by
diamond sawing half cores.

2.3.3 Standard Deviation


The standard deviation, s, is the positive square root of the variance, that is
s = (s2)1/2     (4)

Thus, the standard deviation, like the variance, is a measure of the spread of data about the average value. A
particular advantage of the standard deviation is that it is in the same units as the data, rather than squared units
as in the case of the variance.

Figure 2.5: Two data distributions with common mean value but relatively wide and narrow dispersion
(spreads) of values.
There are several important uses for the standard deviation in dealing with duplicate data. The first
arises from the fact that many of the data distributions (cf. histograms) of concern in error analysis are normal
(Gaussian) in form, that is, they have the form of a bell-shaped curve. In such cases, the standard deviation is
the particular instrument by which probability ranges can be defined, i.e., the proportion of data that lie between
any two limits, or, the probability that a new or random draw from the population will lie within those limits.
For example, in a normal distribution, about 95 percent of the data are within the range m ± 2s. Similarly, 68
percent of the data are in the range m ± s. Tables are available in many statistical texts that allow the calculation
of the proportion of samples between any two selected values of the variable. The procedure is illustrated in
Appendix 1.
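The same proportions can be obtained without statistical tables; a minimal Python sketch using the normal cumulative distribution function (NormalDist is in the standard library from Python 3.8 onward) is shown below, applied to the Pt standard of Table 2.1 with the outlier excluded.

    from statistics import NormalDist

    def proportion_between(m, s, lower, upper):
        """Proportion of a normal population with mean m and standard deviation s
        that lies between the two limits."""
        d = NormalDist(mu=m, sigma=s)
        return d.cdf(upper) - d.cdf(lower)

    m, s = 34.4, 8.9    # Pt standard of Table 2.1, outlier excluded
    print(round(proportion_between(m, s, m - 2 * s, m + 2 * s), 3))   # ~0.954
    print(round(proportion_between(m, s, m - s, m + s), 3))           # ~0.683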

2.3.4 Standard Error of the Mean


A second important use of the standard deviation is in the estimation of the error on the mean, that is,
the standard error of the mean, given by
se = s/√n     (5)

In this case, n samples have been used to determine the mean value. That mean value plus or minus two
standard errors of the mean will contain 95% of new mean values calculated from new samples of size n! It is
evident from the equation for the standard error of the mean that se will in all cases be less than s and that as n
increases, se can be very much smaller than s, even orders of magnitude smaller. Consequently, where
dispersion limits are attached to a standard sample it is essential to be clear as to the nature of the dispersion
being quoted: first, is the standard deviation or the standard error of the mean the basis of the quoted dispersion;
second, does the reported dispersion represent 1 or 2 standard deviations (or standard errors)?


The standard error of the mean (or a close equivalent) is the basis of many comparative statistical tests,
including so-called t-tests, that will be illustrated in a later section of this course.
An example of the use of the standard error of the mean is in placing confidence limits on estimates of mean
values (e.g., Kratochvil and Taylor, 1981)
μ = m ± t·se = m ± t·s/n^1/2                                             (6)

where μ is the population mean, m is the mean of the sample, s is the standard deviation, se is the standard error, n is the number of values in the data set and t is a value obtained from tables of the t-distribution in statistical texts. If n > 30 then t = 1.96 (commonly approximated as 2) for 95% confidence limits.
Metal contents of standards are never known exactly. They are generally assigned an average value, v,
estimated from the group of analyses deemed to be acceptable (see later section of course), with a 95% error
band that is based on the standard error of the mean, i.e., v ± 2*se. This error limit on the quoted value of the
standard is generally relatively small because many analyses by many labs have been used in its estimation
(recall that se decreases as the number of analyses on which it is based increases). Replicate analyses by a
single lab will spread well beyond these error limits.
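The following short sketch (in Python, with purely illustrative replicate values, not data from any real standard) shows how an accepted value and its 95% error band, based on the standard error of the mean, might be computed:

import math

replicates = [1.02, 0.98, 1.01, 0.99, 1.03, 1.00, 0.97, 1.02]   # hypothetical g/t Au values
n = len(replicates)
m = sum(replicates) / n
s = math.sqrt(sum((x - m) ** 2 for x in replicates) / (n - 1))  # standard deviation
se = s / math.sqrt(n)                                           # standard error of the mean

print("accepted value = %.3f +/- %.3f (95%% band, n = %d)" % (m, 2 * se, n))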

2.4 Coefficient of Variation


The coefficient of variation (CV) is the standard deviation, s, divided by the mean value, m, as a
percentage, that is
CV = 100*s/m

(7)

Where the standard deviation represents an error, the ratio s/m is referred to widely as the relative error, er, and
is commonly quoted as a proportion.
er = s/m

(8)

The CV is widely used in the mining industry as a general way to identify symmetry or asymmetry in the
distribution (histogram) of values. Generally, if CVs are in the range 0 to 20%, values are symmetrically
distributed about the mean value. As the value of the CV increases beyond 100%, to as much as 300% or 400%
in extreme cases, the distribution becomes more and more asymmetric or skewed toward high values.
There are many practical cases where the relative error interpretation of s/m is appropriate, such as, (1)
replicate analyses of a standard, and (2) duplicate analyses that span a limited range of concentrations. In such
cases, the relative error can be multiplied by 200 to provide a common estimate of precision as a percentage.
Note that precision is widely quoted as the 95% confidence range of analyses, about the expected value and,
traditionally, is reported as a percentage. Examples are illustrated in Table 2.2 where the means and standard
deviations of 6 metals are reported. These values have been used to calculate both relative error and precision
for each of the elements. Note in particular the wide range in precisions for the various metals in the same
standard, from approximately 8% to about 36%. This variation is in part a function of the difficulty in analyzing
low abundances relative to high abundances of a metal, differences in mineralogies of metals and also the
inherent differences in analyzing different metals by a particular analytical method. The mere fact that detection
limits are likely to be different for different metals implies that analytical precision will also be different.
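A minimal sketch of these calculations, using an illustrative mean and standard deviation (not values taken from Table 2.2), is:

m, s = 2.45, 0.11           # hypothetical mean and standard deviation (e.g., % Cu)

cv = 100.0 * s / m          # coefficient of variation, equation 7
er = s / m                  # relative error, equation 8
precision = 200.0 * s / m   # precision quoted as the 95% range, expressed as a percentage

print("CV = %.1f%%, relative error = %.3f, precision = %.1f%%" % (cv, er, precision))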


2.5 The Simple Linear Model



2.5.1 Introduction
The simple linear model is simply the equation of a straight line. Consider a graph of x1 vs x2, where
x1 represents original analyses of many pulps and x2 represents a second set of analyses of the same pulps; in
other words, from each pulp we have an original and a second value. If the pulp duplicate analyses are plotted
on an x-y graph (scatter plot) we expect that if no bias exists between the two sets of analyses they will plot
scattered about the line y = x. Of course, there might be a component of bias, and the scatter about the y = x line can be either very large or minimal; see, for example, Figures 2.6 and 2.7, in which duplicate analyses plot as a linear trend with very large scatter, to which a best-fit line has been fitted.
In fitting models that quantify a systematic relation between two variables, such as a set of duplicate
analyses, it is common to (i) determine or assume the form of the mathematical relation between the two
variables, and then (ii) adopt a method to calculate the model parameters specific to a data set. A wide range of
mathematical models are available and even with a single model type (e.g., linear) there are various choices to
be made in the calculation procedures available. For example, one might adopt a linear model to describe a
relationship between duplicate analyses. However, a number of very different linear models could arise
depending on the many different calculation methods and their implicit and explicit assumptions. An incorrect
choice of calculation method can lead to an inappropriate model from which incorrect statistical inference can
result. Paired analytical data are not immune from this problem.
Regression techniques incorporating a linear model are commonly used for the comparison of one set
of analyses (ys) with another (xs), as described by Till (1973), Ripley and Thompson (1987), Sinclair and
Bentzen (1998), and many others. The principal justification for use of a linear model is the expectation that,
with no bias, a set of duplicate analyses will be equivalent except for a component of random error. Hence, the
data are expected to cluster along and about the line y = x on a graph of y versus x. If the random difference is
small the spread away from the y = x line will be small; if the random differences are large the spread will be
large. Where there is a significant bias between the two sets, some, even most, of the plotted values will not be
centered on the y = x line. Instead, the data may be centered, all or in part, on another line of the form
y = bo + b1x + e                                                         (8)

where
  bo is the intercept of the line with the y axis (i.e., the value of y if x = 0),
  b1 is the slope of the line (for any two points on the line, the slope is the ratio of the difference in y values to the difference in x values), and
  e  is a measure of spread of the data about the line (a standard deviation).

bo and b1 are called parameters of the linear model; for a specific model, bo and b1 are constants. Once bo and b1 are known, the linear equation can be solved by substituting any value of x and calculating the corresponding value of y.


If y values are distributed normally about the line for any value of x, and bo and b1 are estimated by
a procedure known as least squares, then bo and b1 are also normally distributed (Miller and Kahn,
1962). The advantage of these parameters being normally distributed is that they can be used to make statistical
tests, specifically, whether or not bias is recognizable in the data. In fact, because of the central limit theorem
these statistical tests can be made even if the underlying distributions are not normal, providing the amount of
data is large. As the amount of data increases (n > 40) and the data distribution becomes more symmetric, the
mean values of these parameters tend toward a normal distribution, regardless of the nature of the actual data
distribution.
If statistical tests form part of the evaluation of the significance of a linear model it is important that
the paired data cover an appropriate range of concentrations; otherwise conclusions are not generally
applicable.

2.5.2 Assumptions Inherent in a Linear Model Determined by Least Squares


1. One of the variables, y, is normally distributed about the trend that defines the relation with the second variable, x.
2. The distribution of y values has the same spread regardless of the value of x.
3. The form of the model commonly is assumed to be a straight line.

Consider each assumption separately.


1. y is normally distributed for every value of x.
This assumption is not necessarily met other than coincidentally. The most obvious reason that data
fail this assumption is that data are not represented uniformly throughout the range of paired data values
combined with the fact that error can vary significantly with range. Consequently, the distribution of y-values
can be non-normal and should be tested. Moreover, as discussed below, assumption 2 rarely prevails.
2. y has the same spread regardless of the value of x.
This assumption is rarely met in a large set of duplicate gold analyses. One reason is that more than
one style of mineralization is represented in the data. For example, it is evident that a nugget style (e.g., visible
gold) of mineralization will have very different precision than will lower grade, more evenly dispersed
mineralization (e.g, fine-grained, disseminated, non-visible gold). Apart from these geological considerations
there is the well-established fact the average error increases as the concentration increases (e.g., Thompson and
Howarth, 1973; Francois-Bongarcon, 1998). Consequently, it might be necessary to subdivide the paired data
set into two or more subsets, each of which is more closely in agreement with the assumption.
3. Linear model
There are various sources that can complicate the assumption of a single, simple linear model. For
example, sampling and/or analytical procedures can be more appropriate for one range of data and less
appropriate for another range, as in the case of low grade disseminated mineralization versus high grade, nugget
type mineralization. Consequently, the nature of errors can be very different for the two styles of
mineralization, leading to two very different linear models to describe the two error patterns. Because of the
generally wide scatter of paired data on x-y graphs the two linear models might well appear to be a single
continuous model, perhaps incorrectly interpreted to be curved continuously. Moreover, errors in one
laboratory can be very different from those in another because of different procedures and/or methods
employed in the two labs. Complexity in the model should be expected and looked for, although generally there

is no basis on which to select continuously curved models rather than two or more linear models for different
grade ranges.

2.5.3 A Practical Linear Model


In general, the linear model is applied to a comparison of duplicate analyses (paired analytical data) of
samples. Commonly, an early set of analyses is being compared with a later set of analyses and the two are
expected to give more-or-less equivalent values, on average, unless analytical and/or sampling problems exist
for one or both sets of data. For this reason the relation between the two sets of analyses is expected to be a
straight line with some scatter of data about the line because of ever-present random error. The paired data
might be generated in a variety of ways including:
1. repeat analyses of pulps (or rejects) by the same laboratory,
2. repeat analyses by a second analytical method, by the same or another laboratory,
3. repeat analyses by two analytical methods by one or two laboratories, and
4. analyses of two sets of samples representing the same phenomenon.

In each of these cases there is a general expectation that the two sets of analyses will be identical, on average,
providing that no bias exists in any of the analyses. Reality commonly does not attain this ideal situation. Both
sampling and analytical procedures can lead to very different error patterns for different subsets of the total
data. Consequently, the subjective process of subsetting the data might be necessary. There are two additional
reasons why subsetting of data might be necessary:
1. the presence of outliers in the data, and
2. the presence of influential samples in the data.

Outliers are those values that differ very greatly from the vast majority of data. As a rule, outliers are
fairly straightforward to recognize, although in some cases their recognition is subjective. Influential values
represent a small proportion of data that do not class as outliers but which have a very strong influence on the
particular model calculated for a data set. As an example, consider a data set consisting of 100 paired gold
analyses 95 of which are about evenly scattered in the range 0 to 4 g Au/t. The remaining 5 values spread
between 6 and 10 g/t. The 5 high values might dominate the linear model to the point that the model is not
representative of the other 95 values. Clearly, in such a case it is wise to remove the 5 influential values and
calculate a model for the 95 values. The 5 values might or might not be described adequately by the model
based on 95 values; if not, they must be considered separately.

2.5.4 Choice of an Estimation Method


There are four general approaches that have been used to fit linear models to paired analytical data:
1. weighted least squares,
2. principal axis,
3. major axis, and
4. reduced major axis.

In all 4 cases a general method known as least squares is used to determine the linear model and the result is
referred to as a best-fit model. In each case the term "best fit" means that a particular error criterion is
minimized relative to the linear model that is determined. Not all of these criteria are appropriate for comparing
replicate assay data.
The most widely available method of fitting a line to a set of paired data, traditional least squares, is
an example of an inappropriate least squares procedure that, in some cases, has been incorrectly applied to the
description of paired assay data. The reason that traditional least squares is inappropriate for such data is that

the method assumes that one of the variables (x) is perfectly known and places all the error in the second
variable (y). In reality, there are errors in both of the variables being compared and this must be taken into
account in defining the linear model. The problem with traditional least squares is well illustrated by an
example, gold assays of duplicate samples of blastholes from the Silbak Premier gold mine, illustrated in Figure
2.6 (from Sinclair and Bentzen, 1998). Two different lines are obtained, depending on which variable is taken
as y, the dependent variable. If we were to incorrectly accept these lines and test one variable statistically
against the other we would arrive at two opposing conclusions for the two lines, both conclusions being
incorrect. For example, if AUD is taken as y we would conclude that bias exists and that AUD underestimates
AU by about 27%; if AU is taken as y we would conclude that bias exists and that AU underestimates AUD by
about 29%. These two results are dramatically in conflict and clearly show that the traditional least squares
method is generally inappropriate as a means of defining a best fit linear model for paired assay data.
A least squares procedure is required that produces a fair representation of the underlying trend in a set
of paired data. This is achieved best by using a method that takes into account the different errors that exist in
the two sets of data being compared (e.g. Mark and Church, 1974). Because errors are rarely known in detail in
the comparison of many kinds of variables, including assay data, several practical approaches have been offered
for producing least squares models so that relationships between paired variables are fairly determined.
Weighted least squares procedures (e.g., Ripley and Thompson, 1987) can be highly subjective
because of the manner by which the weights are determined. In some cases they provide linear models that lie
outside the limits defined by the two traditional least squares procedures. Consequently, weighted least squares
methods are not generally acceptable to give an unbiased treatment of paired assay data.
The major axis solution is based on minimizing the squared perpendicular distances from each point to
the line. This is equivalent to minimizing simultaneously in both the x and y directions. This procedure is
affected by differences in scale between the two variables being compared, not normally a problem in dealing
with paired quality control data.
The Reduced Major Axis (RMA) linear model combines a standardization of the two variables (i.e.,
divide each value by the standard deviation of the data) and a Major Axis least squares solution to determine
the linear model. This procedure avoids any concern of difference in scale of the two variables (e.g., where
large biases exist between the paired variables). Dent (1937) showed that for paired variables the maximum
likelihood estimator of the ratio of errors, where the errors are unknown, is (sy/sx)^2, which is equivalent to an
RMA line through the data. In general, at the outset of a study, errors are unknown for paired analytical data.
A reduced major axis (RMA) regression is desirable where it is important that errors in both variables
be taken into account in establishing the relation between two variables (Sinclair and Bentzen, 1998). The
methodology for reduced major axis regression has been described in an earth science context by Agterberg
(1974), Till (1974), Miller and Kahn (1962) and Davis (1986). Till (1974) emphasizes the importance of using
RMA in comparing paired (duplicate) analytical data.
The general form of the RMA line is:

y = bo + b1x + e                                                         (9)

where x and y are the duplicate analyses, bo is the y-axis intercept of the RMA linear model and b1 is the slope
of the model. For a set of paired data, b1 is estimated as

b1 = sy/sx                                                               (10)

where sx and sy are the standard deviations of variables x and y, respectively, and bo is estimated from

bo = ȳ - b1x̄                                                             (11)

where ȳ and x̄ are the mean values of y and x, respectively. Commonly we are interested in whether or not the
line passes through the origin because, if not, there is clearly a fixed bias of some kind. The standard error on
the y-axis intercept, so, is given by
so = sy{([1-r]/n)(2 + [x̄/sx]^2[1+r])}^1/2
where r is the correlation coefficient between x and y. The standard error on the slope is
ss1 = (sy/sx)([1 - r^2]/n)^1/2
The dispersion Sd about the reduced major axis is
Sd = {2(1-r)(sx^2 + sy^2)}^1/2
These errors can be taken as normally distributed (cf. Miller and Kahn, 1962) and can be used to test
whether the intercept error range includes zero (in which case the intercept cannot be distinguished from zero)
and the slope error range includes one (in which case the slope cannot be distinguished from one). The
dispersion about the RMA line can be used in several practical comparisons including (1) the comparison of
replicates of several standards by one laboratory with replicates of the same standards by another laboratory;
and (2) the comparison of inter- and intra-laboratory paired analyses for routine data spanning a wide range of
values.
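A brief sketch (assuming the numpy library is available) of how the RMA parameters and the error expressions above might be computed for a set of paired analyses follows; the slope is taken as positive, which is appropriate for duplicate assays with positive correlation:

import numpy as np

def rma_fit(x, y):
    """Reduced major axis fit for paired analyses, following equations 9 to 11 and
    the standard-error expressions above (a sketch, not a full implementation)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    r = np.corrcoef(x, y)[0, 1]
    b1 = sy / sx                                      # slope, equation 10
    b0 = y.mean() - b1 * x.mean()                     # intercept, equation 11
    s0 = sy * np.sqrt(((1 - r) / n) * (2 + (x.mean() / sx) ** 2 * (1 + r)))  # se of intercept
    s1 = (sy / sx) * np.sqrt((1 - r ** 2) / n)        # se of slope
    sd = np.sqrt(2 * (1 - r) * (sx ** 2 + sy ** 2))   # dispersion about the RMA line
    return b0, b1, s0, s1, sd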

2.5.5 Example: Silbak Premier Gold Deposit


An example is provided in Figures 2.6 and 2.7 where Au analyses of duplicate sample pairs are plotted
for a set of quality-monitoring data for blasthole assays from the Silbak Premier gold deposit, Stewart, British
Columbia (see Sinclair and Bentzen, 1998; Sinclair and Blackwell, 2002). In this example, the two traditional
least squares models (error attributed entirely to one variable) contrast strongly with the centrally positioned
RMA model. Parameters of the variables and the three illustrated linear models are summarized in Table 2-1. In
this case assume that the RMA model is appropriate for interpreting the significance of the data. The intercept
of -0.173 has an error s = ±0.323 so cannot be distinguished from zero. The slope is 0.95 with an error of 0.057
so cannot be distinguished from 1.0 statistically, and we are led to conclude that there is no bias between the two
sets of data. The large dispersion of the data is quantified, on average, by the error term for the linear model as
Sd = ±4.12 g/t, a remarkably large value that leads to great concern about the blasthole sampling procedure.
Blasthole samples are unbiased but have a very large random error. Even when the data are subdivided
arbitrarily into lower grade and higher grade subsets, the average errors for both subsets are extremely high (see
Figure 2.7 and Sinclair and Bentzen, 1998, for details). Note that such subsetting provides a better
description of the paired data because examination of the scatter plot for the entire data set clearly shows that
the higher grade values are substantially more dispersed than are the lower grade values. If a particular
threshold is used for subsetting it is safest to apply the threshold to the average of each pair.
Close inspection of the diagram reveals that the dispersion of data is not uniform throughout the range
of the data. Specifically, for lower grade data the dispersion about the line is much less than for higher grades.
Consequently, formal statistical tests, as done above for illustration, are not appropriate; the data must be
subsetted (subjectively) and the linear model for each subset considered individually.
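As a simple illustration of the statistical checks just described, the following sketch applies approximate 95% limits to the RMA parameters quoted above (intercept -0.173 with standard error 0.323; slope 0.952 with standard error 0.057):

# Hedged sketch: bias checks using the RMA parameters quoted in Table 2-1.
b0, se_b0 = -0.173, 0.323
b1, se_b1 = 0.952, 0.057

intercept_includes_zero = (b0 - 2 * se_b0) <= 0.0 <= (b0 + 2 * se_b0)
slope_includes_one = (b1 - 2 * se_b1) <= 1.0 <= (b1 + 2 * se_b1)

# No bias is indicated only if both conditions hold.
print(intercept_includes_zero, slope_includes_one)   # True True -> no bias indicated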


Figure 2.6: Scatter plot of 122 duplicate blasthole samples analyzed for gold, Silbak Premier gold
mine, Stewart, British Columbia. Reduced major axis (RMA) and two traditional least squares linear
models are illustrated. Note the limited scatter of low values and the much greater scatter of high
values, suggesting the presence of two data subsets that should be investigated separately.

Figure 2.7: The data of Figure 2.6 subdivided arbitrarily into low- and high-grade subgroups. RMA
models have been fitted to each group as well as biased linear models of traditional least squares.


TABLE 2-1: PARAMETERS OF VARIOUS LINEAR MODELS SHOWN ON FIGURE 2-6

Model     y-variable   intercept and error(1)   slope and error(1)   dispersion about line(2)
RMA(3)    AUD          -0.173 (0.323)           0.952 (0.057)        4.12
TLS(3)    AUD           0.677 (0.035)           0.708 (0.191)        2.66
TLS(3)    AU            1.028 (0.192)           0.782 (0.038)        2.80

(1) Values in brackets are one standard error.
(2) One standard deviation (corrected from Sinclair and Bentzen, 1998).
(3) RMA = reduced major axis (error in both variables); TLS = traditional least squares (error entirely in the y-variable).

2.6 Displaying Data: the Histogram


Histograms are a widely-used graphical method for viewing assay data that are useful in showing
1. the general form of a grade distribution (including the central tendency of the data),
2. an indication of the proportions of data at various abundance levels,
3. the presence of multiple populations, particularly outliers, and
4. whether or not transformations might be required if statistical tests are to be conducted using the data (i.e., if a skewed distribution exists and a normal distribution is required, a transformation is necessary).
Wherever used, histograms should be accompanied by a listing of the statistical parameters of the data
they represent (i.e., number of data, average value, standard deviation, etc.) as well as the class interval used to
construct the histogram.
In the general assessment of paired analytical data, histograms serve as a simple means of comparing
the two sets of measurements that comprise the paired data. In addition, they are useful in demonstrating
symmetry (or asymmetry) of a distribution, which can lead to recognition of outliers. In the comparison of
paired analytical data there generally is no a priori reason to expect an asymmetric distribution of differences.
Consequently, if asymmetry exists in a histogram of differences of paired data a problem is indicated in one of
the two members of the pair. Suppose, for example, there is a remarkably long tail of high positive values in the
histogram of differences where a difference is determined as a lab A value minus a Lab B value. This indicates
that there is a tendency for Lab A to overestimate relative to Lab B, although without additional information it
is not possible to say which lab is correct. Moreover, such a bias may be restricted to a particular grade range.
Figure 2.8 is an example of a histogram of analytical differences for paired Au analyses of 412 samples
analyzed by labs A and B. The distribution of differences is seen to be symmetric and bell-shaped. In this
example, the class interval (C.I. = 302.5 ppb) of one-quarter standard deviation has been centered on the mean
(m = 231 ppb Au).
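A minimal sketch of how such a histogram of differences and its accompanying statistics might be produced (assuming numpy and matplotlib are available, and hypothetical arrays lab_a and lab_b of paired assays) is:

import numpy as np
import matplotlib.pyplot as plt

def difference_histogram(lab_a, lab_b):
    d = np.asarray(lab_a, float) - np.asarray(lab_b, float)
    m, s, n = d.mean(), d.std(ddof=1), len(d)
    ci = s / 4.0                                  # class interval of one-quarter standard deviation
    # 32 classes of width ci, with one class centered on the mean; values beyond
    # about +/- 4 standard deviations fall outside the plotted classes.
    edges = m + ci * (np.arange(-16, 17) - 0.5)
    plt.hist(d, bins=edges)
    plt.xlabel("difference (lab A - lab B)")
    plt.title("n = %d, mean = %.0f, sd = %.0f, class interval = %.1f" % (n, m, s, ci))
    plt.show()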


Figure 2.8: Histogram of 412 differences in paired gold assays by two different labs (Lab A and Lab
B). The average difference is 231 ppb Au and the standard deviation is 1210.
A second example for paired analyses of duplicate blasthole samples from the Silbak Premier gold
mine, Stewart, British Columbia, is shown in Figure 2.9. These same data are used in the next subsection to
illustrate the use of scatter diagrams.

Figure 2.9: Histogram of differences in gold analyses of two independent samplings of piles of
blasthole cuttings, Silbak Premier gold mine, British Columbia.


2.7 Displaying Data: Scatter (x-y) Diagrams


In quality control work we commonly wish to examine a possible relation between two variables. Very
commonly there is a hope or expectation that the two variables will be the same on average, as is the case
where the same set of samples is analyzed by two different labs. When comparing such data on a scatter
diagram it is important to ensure that both the abscissa and ordinate axes are to the same scale. Otherwise, the
underlying relation between the two sets of analyses can be obscured from visual recognition. An example of
such a scatter diagram is shown in Figure 2.10.

Figure 2.10: A simple linear model to describe errors in paired data as a function of composition.
Scales for x and y are not quite, but should be, equal interval. Data are duplicate half-core samples
for a molybdenum deposit and show wide scatter about the y = x line (the lower line), making the data
difficult to interpret visually. The upper line is a best-fit line (reduced major axis) with a slope of 1.2,
indicating, on average, a 20 percent bias between the two sets of results.
In some cases, two fundamentally different variables are displayed on a scatter diagram in order to
establish whether or not some form of correlation exists between them. For example, it may be that bulk density
of an ore is closely related to grade. If such a relation can be established it might be possible to estimate bulk
density from the grade rather than the more costly approach of physical measurements, although some
monitoring of the relation would be required.

2.7.1 xy Plots Using P-res


P-res was designed as a tool for interpretation rather than a refined tool for illustrating reports. Hence,
while the graphics are not particularly esthetic, the output contains all the information that is essential to the
quantitative interpretation of paired, quality control data. We have seen previously examples of histogram
output from P-res. The software is particularly useful in examining and interpreting paired data represented on
scatter (x vs y) diagrams. An example is shown in Figure 2.11. This diagram illustrates a simple scatter plot of

some of the results of a quality control program of a molybdenum prospect, in this case duplicate half core
samples of about 5 feet in length. Graphic and numeric output are explained in the figure. The numeric output
for individual variables is well established in practical usage in the mineral industry. The numeric output for the
linear model (in this case a Reduced Major Axis, RMA, model) is particularly important because the various
parameters allow statistical testing for the presence of bias and the quantitative determination of both random
error and bias, as will become evident later in the course.

Figure 2.11: Paired analyses of duplicate half core samples (two half cores of the same
hole interval). This is typical of P-res output; most of the features are labeled. The lower
left corner contains statistics of the ordinate (left) and the abscissa (lower) as well as the
correlation coefficient between the two variables. Above these statistics are the name of
the ordinate and a multiplier required to provide an appropriate scale for the values on
the diagram. The figures beneath the diagram are the parameters and errors of the
reduced major axis line (RMA). The figure itself shows the individual data points plotted,
the y = x line and the RMA line.


SECTION 3
3.0 STATISTICAL TESTS COMMONLY USED IN TREATING DUPLICATE AND
REPLICATE ANALYSES
3.1 Introduction
As an introduction to the various statistical tests that are important in dealing with paired assay data it
is useful to have an appreciation of the concept of probability in the context of statistics. Consider a situation
where a population has been sampled (e.g., 20,000 samples from a well-explored mineral deposit or 309
samples from a prospect) and a histogram of the resulting assays has been prepared. Commonly, the histogram
can be approximated closely by the well-known normal (bell-shaped) curve known as a probability density
function shown for the molybdenum data in Figure 2.2. In some other cases, the data can be transformed to
approximate a normal distribution. Normal distributions have the following general equation
y = [(2π)^-0.5 s^-1] exp[-(xi - m)^2/(2s^2)]                             (3-1)

where xi is any value of the variable, y is the height of the curve (probability density) at xi,
m is the mean value of the variable, and
s is the standard deviation of the variable.
A simple transformation of x-values of the variable, z = (xi - m)/s, reduces the normal equation to the
standard normal equation (Figure 3.1)

y = (2π)^-0.5 exp[-z^2/2]                                                (3-2)

Figure 3.1: Standard normal distribution. Z-values along the abscissa are numbers of standard
deviations to the left (-ve) or right (+ve) of the mean (0). Percentage figures indicate the percentage
of area under the curve between the two indicated z-values. All normal distributions can be cast as a
standard normal distribution by subtracting the mean from each value and dividing by the standard
deviation of the distribution. Consequently, the standard normal distribution has a mean of zero and
a standard deviation (and variance) of 1.0.

Fortunately, neither of these formulae need be used directly in quality control work. All normal distributions
can be reduced to this same standard normal equation; hence, this equation is a useful basis by which to
summarize the proportion of values that occur anywhere within a distribution. For example, the centrally located
range, (m - s) to (m + s), equivalent to the range z = -1 to z = +1, contains 68.26% of a normal
distribution. The remaining 31.74% of the values are located equally in the lower and higher tails of the
distribution, limited respectively by the values (m - s) and (m + s), i.e., z = -1 and z = +1. Similarly, the mean

plus and minus 2 standard deviations encompasses 95.45% of the area under the normal curve, equivalent to
95.45% of the values that make up the data. In fact, the proportion of area under the normal curve between
minus infinity (-∞) and any value of z can be found in tabulations in most statistical texts. The difference
between two such estimates for z1 and z2 is the proportion of area between z1 and z2 and is equivalent to the
probability that a random value from the population will fall between z1 and z2 (Figure 3.2). Statistical tests
described in this course make use of this concept of probability, i.e., the percentage likelihood that a value
calculated from a data set (with a known form to the distribution) lies within a certain range that is defined by
the level at which a test is run.
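A small sketch of this calculation, using the error function in place of published tables, is given below; the two example ranges reproduce the 68.26% and 95.45% figures quoted above:

import math

def normal_cdf(z):
    # cumulative area under the standard normal curve from minus infinity to z
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def area_between(z1, z2):
    return normal_cdf(z2) - normal_cdf(z1)

print(round(area_between(-1.0, 1.0), 4))   # ~0.6827
print(round(area_between(-2.0, 2.0), 4))   # ~0.9545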
The simple statistical tests to be considered here are examples of a large group of so-called tests of
significance. In general, such tests involve assuming a particular hypothesis (commonly referred to as the null
hypothesis) and then using data to generate a statistic whose value will be within defined limits if the
hypothesis is highly likely (or true!). Note the use of the term "highly likely"; statistical tests rarely lead to
absolute certainty; rather, they allow a person to make a statement such as "the result we have obtained is
within the limits of what would be expected 95 percent of the time if our hypothesis is true." This is
comparable to reporting a result of a national poll stating that 45% answered yes to a question, so the true
figure is within 3% of the value 45%, 19 times out of 20. In our terms, the mean value is 45%, the standard
deviation is 3/2 = 1.5% and "19 times out of 20" is equivalent to 95%, i.e., 95% of the time the true mean value
of the poll will lie in the range 42% to 48%. But, note that there is a 5% chance of making a wrong decision,
i.e., the true value has a 2.5% chance of being higher than 48% and a 2.5% chance of having a value less than
42%. This chance of being wrong is what statisticians commonly refer to as the level of significance of
a statistical test, i.e., a test is significant at the 0.05 level, where 0.05 is the proportion of chances of being
wrong (equivalent to 5% chance of being wrong). Of course, the test could be done at some other level of
significance, say 0.01 or 0.1. However, tests of significance in sampling and analytical work in the mining
industry are commonly conducted at the 0.05 level and that will generally be the level used throughout this
course.

Figure 3.2: Example of a normal (Gaussian) probability density function (pdf). The curve is symmetric
about the mean value of m = 0.76%Cu. Spread (dispersion) is measured by the standard deviation (s
= 0.28 in this case). Note that there are inflections in the curve at m ± s. Two arbitrary Cu values are
shown; these can be converted to z-values (by subtracting m and dividing by s) and the proportion of area
under the curve lower than those values can be derived from tabulations.

3.2 F-test: Comparison of Variances


The F-test is a test to evaluate whether or not two sets of data have variabilities (variances) that could
both be expected results of samplings of the same parent population. The so-called F-distribution is the
distribution of ratios of the variances of two sets of values (statistical samples) drawn from the same parent
normal distribution. Consideration of an example will illustrate the test. Fletcher (1981) documents a situation
where 56 samples have been analyzed for gold by each of two labs using different analysts and different
analytical techniques. The data are illustrated as histograms in Figure 3.4. The question to be answered is

whether or not the two labs/techniques produce data of equivalent quality. Part of the answer is to determine
whether or not the variability in the analyses of both labs could be expected sampling variability from the same
parent population. Here, we will hypothesize (null hypothesis) that the two variances are equal, i.e.,
though numerically different, they are two possible sampling outcomes from the same parent population.

Figure 3.3: An example of the F-distribution, typically positively skewed. The value fα is the critical
value that is tabulated for the particular degrees of freedom (n - 1) on which the distribution is based.
α is the probability of being wrong in hypothesis testing with the F-function, often taken as 0.05.

To test this hypothesis we will use the ratio of the variances obtained by the two labs and conduct an F-test. The F-distribution (cf. histogram of
variance ratios drawn from a normal population) is known (tabulated in statistical texts), so we know the
probability with which values will occur for any range of F values. In this case we conduct the test at the 0.05
level and use a 2-sided distribution, that is, we distribute the 5% chance of being wrong equally on the two tails
of the F-distribution. This gives us two critical F-values; if the data provide a calculated F value between these
two critical values we accept the null hypothesis. Otherwise we reject the null hypothesis and conclude that the
two labs indeed have different random errors, in which case, the lab with the greatest variability is the poorer
quality of the two.
Details of the calculation: The standard deviations of the analyses by the two labs are s1 = 0.33 and s2 = 0.24, to give variances
of s1^2 = 0.1089 and s2^2 = 0.0576. If the two estimates of variance represent the same parent normal distribution, their ratio
should lie within an expected range of F-values characteristic of a normal distribution. These values, determined from
tabulations in many statistical texts, are approximately 0.60 and 1.68 (these values are the 2.5 and 97.5 percentiles of the
F-distribution with 55 degrees of freedom for both variables, so we are conducting a statistical test at the 0.05 level). Fcalc =
0.1089/0.0576 = 1.89, which lies outside the expected range of values. Consequently, we conclude that the two variances are
highly unlikely to represent the same population. In other words, one lab/analytical method has a larger random error than does
the other lab; in this case, lab 2, with the lowest standard deviation, has the lowest random error.

This simple example illustrates how variability in the analyses of standard samples can be used to
measure relative quality of labs. This is particularly true where the labs in question do not realize which, of the
samples submitted to them, are standards. When a lab knows that a particular sample is a standard the lab might
be prone to give the sample special attention and thus produce better results than would apply to routine
samples. It is common to always put the larger variance in the numerator so as to give an F-value greater
than 1.0; hence, a single critical value of F for the 0.05 level is used to compare with a calculated F-value. In
particular, a calculated F-value less than the critical value means that we accept the null hypothesis (i.e., we
conclude that the two variances represent the same parent population), whereas a calculated value greater than
the critical value means that we reject the null hypothesis (i.e., the two variances do not represent the same
parent population).
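The calculation above can be reproduced with a few lines of code; the sketch below (assuming the scipy library is available) uses the same standard deviations and sample sizes:

from scipy.stats import f

n1, n2 = 56, 56
s1, s2 = 0.33, 0.24
F_calc = (s1 ** 2) / (s2 ** 2)                    # larger variance in the numerator: ~1.89

alpha = 0.05                                      # two-sided test at the 0.05 level
F_lower = f.ppf(alpha / 2, n1 - 1, n2 - 1)        # lower critical value (~0.6)
F_upper = f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)    # upper critical value (~1.7)

same_parent_population = F_lower < F_calc < F_upper
print(round(F_calc, 2), round(F_lower, 2), round(F_upper, 2), same_parent_population)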


Figure 3.4: Histograms of 56 samples analyzed for Au by two different analysts using different analytical
techniques (after Fletcher, 1981).

3.3 Student's t-test: Comparison of Means


In addition to testing the variability of two sets of data, it is commonly important to determine if the
mean values are equivalent i.e., in much quality control work, to check that there is no evidence of bias between
the two mean values. For a normally distributed population, such a test makes use of the t-distribution. Given
that two sets of data are available, an F-test is conducted to determine whether or not the variances are the
same, and if so, the data are used to calculate a t-value that can be compared with an expected range if the two
data sets were drawn from the same parent normal population.
t = (m1 - m2)/[(s1^2/n1) + (s2^2/n2)]^1/2                                (3-3)

Figure 3.5: An example of the t-distribution. Note the symmetry. In hypothesis testing the error in the
test is distributed equally to each tail. The critical values tα and t1-α are the same in absolute value.


Critical t-values are determined from tables in many introductory statistical texts using a level of significance,
α, and degrees of freedom, df, determined as follows:

df = n1 + n2 - 2                                                         (3-4)

For example, if n1 = 16, n2 = 9 and α = 0.05, then df = 23 and the critical t-value (from tabulations) is tcrit =
1.714. If a calculated (absolute) value of t is less than the critical value, the null hypothesis (means are the
same) is accepted, otherwise, the hypothesis is rejected and the means are said to be different at the 0.05 level.
For the null hypothesis that the two means are identical, but where the variances have been shown to
be different (by an F-test), the t-value is calculated as in the case for equal variances (equation 3-3). Degrees of
freedom, df, are determined as
df = [(s1^2/n1) + (s2^2/n2)]^2 / [(s1^2/n1)^2/n1 + (s2^2/n2)^2/n2]       (3-5)

This value for df will not necessarily be an integer but interpolation can be easily done in using it to estimate
critical t-values from published tables that are available in most introductory statistical texts.
Consider a comparison of mean values in the two sets of data used above in illustrating an F-test and
shown as histograms in Figure 3.4. The F-test demonstrated that the variances for the two labs/methods were
significantly different.
Details of the calculation: The mean values reported by Fletcher (1981) for the two sets of analyses of
a common standard sample by the two labs are m1 = 4.16 g/t and m2 = 3.43 g/t. Degrees of freedom
are calculated by equation 3-5 to be 73.1. (In this case the calculation is unnecessary: the calculated
value for df will be higher than the lowest n, which is 56 (a value above 30), and the critical t-value
defaults to 1.96 for an infinitely large sample.) The calculated t-value tcalc = (4.16 - 3.43)/[(0.33^2/56) +
(0.24^2/56)]^1/2 = 13.4, very much larger than the critical value of 1.96. Consequently, we conclude that
the two means are highly likely to be different.
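A minimal sketch of the calculation, using only the summary statistics quoted from Fletcher (1981), is:

import math

m1, s1, n1 = 4.16, 0.33, 56     # lab 1 summary statistics
m2, s2, n2 = 3.43, 0.24, 56     # lab 2 summary statistics

t_calc = (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)   # equation 3-3

t_crit = 1.96                   # approximate two-sided critical value for large df
means_equivalent = abs(t_calc) < t_crit
print(round(t_calc, 1), means_equivalent)                     # 13.4 False -> means differ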

3.4 Paired t-Tests


Global bias between paired data can be tested using a simple statistical test known as the paired t-test.
In general, if no bias exists between pairs of a set of duplicate analytical data, the differences between
corresponding values should average zero. In other words, a histogram of differences should be centred close to
zero and, with good quality analyses, the spread of differences either side of zero should be relatively slight and
more-or-less symmetrically distributed about the central value.
The paired t-test involves calculating the differences for paired values, and determining the mean and
standard deviation of those differences. These parameters can be used to test whether or not a confidence
interval of differences contains the value zero. If zero is within the 95% confidence range of the mean value of
differences, then the null hypothesis is accepted, that is, the calculated mean differences is a possible sampling
outcome of a true difference of zero for α = 0.05.
The paired t-test relates particularly to normal distributions. However, the central limit theorem says
that where such a test is based on mean values, the distribution of mean values tends toward a normal
distribution for large numbers (n > 30) for all distributions, particularly if the distribution is symmetric. The
paired t-test is an alternative but equivalent way, relative to the classical t-test, to compare mean values, but it
necessitates that the data are paired (which is not a requirement of the classical t-test described previously). The paired t-test
is conducted by averaging the differences in paired values, determining the standard deviation, calculating the
standard error of differences and defining the confidence limits on the mean difference. If zero is included in
the range d - 2se to d + 2se, the null hypothesis is accepted (i.e., the calculated average difference is an

expected sampling variation from zero and cannot be distinguished from zero); if zero is not included in the
95% confidence range of the mean difference, then the mean difference is statistically different from zero and
bias exists between the two sets of data.
se = sd/n^1/2                                                            (3-6)

The previous example of a t-test for the data illustrated in Figure 3.4 could be evaluated with a paired t-test if
the original data pairs were available and differences could be calculated for each pair.

Figure 3.6: Histogram of 412 differences in paired gold assays for two labs, A and B. The average
difference is 231 ppb Au and the standard deviation of differences is 1210.
As an example of a paired t-test consider the data of Figure 3.6 which shows a histogram of the
differences in Au analyses by two labs, A and B, for 412 samples. The question to be asked is whether or not
the differences (average diff = 231 ppb, standard deviation = 1210) are of a magnitude to be expected or
whether there is a global bias between the two labs? The standard error is se = 1210/412^1/2 = 59.6. Confidence
limits on 231 are 231 ± 2 × 59.6 = 231 ± 119, that is, 112 to 350, a range that does not include zero.
Consequently, a global bias is demonstrated with lab B measuring higher than lab A by 231 ppb, on average.
Another way of expressing this global bias is as a percentage of the mean of the original values: if the mean
value of the data is 2.0 g/t Au (i.e., 2000 ppb), then the global bias is 100 × 231/2000 = 11.6%. While these
figures demonstrate a significant bias, on average, between the results of the two labs, it must be recognized that the
local bias can vary with different gold concentrations.
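A brief sketch of the paired t-test logic, using only the summary numbers quoted above (n = 412 pairs, mean difference 231 ppb, standard deviation of differences 1210), is:

import math

n, d_mean, d_sd = 412, 231.0, 1210.0     # pairs, mean difference, sd of differences (ppb Au)
se = d_sd / math.sqrt(n)                 # standard error of the mean difference, equation 3-6
lower, upper = d_mean - 2 * se, d_mean + 2 * se

# A global bias is indicated if zero lies outside the ~95% confidence range.
bias_indicated = not (lower <= 0.0 <= upper)
print(round(se, 1), round(lower), round(upper), bias_indicated)   # 59.6 112 350 True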
In conducting both the t-test and paired t-test one must be careful in advance to eliminate outlier
values; these are generally easily recognized as values extremely far out on the limbs of a histogram of
differences and can also generally be identified on a scatter diagram of the duplicate analyses.

A practical problem in conducting the t-test using assay data is that data commonly are highly
concentrated at low values which have a substantially different spread of differences compared with higher
values. Consequently, to apply the t-test fairly it may be necessary to subset the data into groups, each with
more-or-less uniform dispersion over the grade range of the group. Subdividing the data is a subjective
procedure although appropriate threshold values are commonly fairly evident from visual examination of a
scatter diagram. The effect of grouping all the data together is that n is very large and generates small values of
tcalc that fail to identify bias over part of the total range of data.

Figure 3.7: Schematic subsetting of data to compare averages by the application of a threshold to
omit lower values. The ellipses represent a field of plotted paired data. A and B illustrate a threshold
(solid line) applied to ordinate and abscissa respectively, to provide a biased comparison. D shows
the threshold applied to both abscissa and ordinate to give a fair comparison of values.
Another potential problem in dealing with paired data can arise where the two average values below
(or above) a threshold value (i.e., the averages of both the x- and y-values) are to be compared. There are many
practical situations where the two averages are expected to be more-or-less the same. If the two averages are
not the same it may indicate the presence of a bias between the two sets of measurements. It is common
practice, for example, for only paired values above some threshold (e.g., cutoff grade) to be compared. This
comparison of average values can be biased if the limiting grade (threshold) for a subset is applied to one
member of the pair only (see Figure 3.7). To offset this bias, any cutoff grade used to limit a subset of paired

data should be applied to both members of the pair, or, better still, the data should be subsetted based on the
average value of a pair. In Figure 3.7 the reason for bias if the threshold is applied to the data of only one lab is
obvious: values below the threshold are allowed for one lab but are preferentially excluded for the other.

3.4.1 Example: Silbak Premier Blasthole Data

Figure 3.8: Histogram of differences in gold analyses of two independent samplings of piles of
blasthole cuttings, Silbak Premier gold mine, British Columbia. The tabulated data are sufficient to
conduct a paired t-test.
This example is modified from Sinclair and Bentzen (1998) and their paper can be examined for more
detail. A histogram of differences for duplicate blasthole samples analyzed for gold, is plotted in Figure 3.8. For
these data (n = 125, one extreme outlier has been omitted) the mean difference is 0.397 g/t Au with a standard
deviation of 3.04. This information provides a t-value of
t = (0.397 - 0)/(3.04/125^1/2) = 1.46
The t-value is well below the critical value (e.g., Walpole and Myers, 1978) of 1.96 so no global bias can be
identified. An equivalent result is obtained if 3 influential values are omitted.
A second useful test can be applied to the data. One can count the number of paired samples that plot on
each side of the y = x line. In an unbiased sample roughly equal numbers of data should plot on each side of the
line just as with a large number of coin tosses one expects heads to appear about half the time. In general, the
number of positive differences gives the number of samples on one side of the line; clearly, all remaining
samples are on the other side of the line (except for equal valued pairs). For example, if there are a total of 122
data points (or coin tosses) the expectation is that 61 of them will, on average, plot on one side of the y = x line
and 61 will plot on the other side. The sampling distribution of this mean value is binomial (closely
approximated by a normal distribution) with a standard error of (npq)^1/2 where n = 122, p = 0.5 (the proportion
of positive values) and q = 0.5 (the proportion of negative values). Consequently, the 95% confidence limits for
a mean value where n = 122 are 61 ± 2(npq)^1/2 = 61 ± 11. For the Silbak Premier blasthole data there are 78
positive differences in the data set of 122 values, a number that is well outside the expected limits of 50 to 72.
Hence, we can conclude that there is an abnormal distribution of plotted values relative to the line y = x (and
probably a small bias) even though the paired t-test was not able to identify a bias.
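A short sketch of this sign-count check, using the numbers quoted above, is:

import math

n, p, q = 122, 0.5, 0.5
expected = n * p                            # 61 points expected on each side of y = x
se = math.sqrt(n * p * q)                   # binomial standard error, ~5.5
lower, upper = expected - 2 * se, expected + 2 * se

observed_positive = 78                      # positive differences reported in the text
outside = not (lower <= observed_positive <= upper)
print(round(lower), round(upper), outside)  # 50 72 True -> distribution about y = x is abnormal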
A second example is illustrated in Figure 3.7 involving original and check Ag analyses for the Equity
Silver mine. Of the 23 values, 21 plot on one side of the y = x line. Because of the distribution of values, the
t-test does not indicate bias. However, such an extreme distribution of data about the y = x line is highly
unlikely. Confidence limits on the expected value of 11.5 are ±2(npq)^1/2 = 2(23 × 0.5 × 0.5)^1/2 = ±4.8.
Hence, 95% of the time, the distribution about the y = x line should be no more extreme than 11.5 ± 4.8,
i.e., 6.7 (say 6) to 16.3 (say 17).

Figure 3.7: Check of mine analyses for silver by an independent lab, Equity Silver mine. C =
concentrate; O = ore; T = tails.

3.5 Significance of r, the correlation coefficient


In quality control work the correlation coefficient is too often used in an inappropriate manner, both in
dealing with a single set of paired data (e.g., evaluating two different analytical methods applied to the same
pulps) and in comparing two different sets of paired data. Where different samples and different numbers of
samples have been analyzed by two different labs the correlation coefficient (two or more coefficients where
multiple sets of data are involved) is not an appropriate means of comparing the quality of the data sets.
Providing precisely the same samples have been analyzed in either of the foregoing cases, the correlation
coefficient is meaningful and, in the latter case, it can be useful to compare or contrast the relative quality of
two different labs/analysts/methods.
Different numbers of duplicate samples in two data sets leaves open the possibility that one set has a
very different range of concentrations than the other and/or that the density of values in one set is very different
from the other. Both of these situations affect the correlation coefficient, possibly to a very large degree. Hence,
in general, the correlation coefficient is an undesirable measure for the purpose of comparing one set of paired
data with a second set of paired data. An added problem is that the correlation coefficient is complexly related
to error and can mask details of the errors present in any set of paired data.

3.6 Statistical Tests Involving the Linear Model


This section on equations for the simple linear model is a repeat of some of the material in an earlier
section entitled "A Practical Linear Model" but is included here for easy reference to the equations. Those
wishing to omit a review of the equations should move directly to the end of this section (to the next subsection,
entitled "Significance Tests"), where specific uses of the various statistical parameters are described.
The general form of a Reduced Major Axis line fitted to a set of data is:

y = bo + b1x + e                                                         (3-7)

where x and y are the duplicate (paired) analyses, bo is the y-axis intercept of the RMA linear model and b1 is
the slope of the model. For a set of paired data, b1 is estimated as

b1 = sy/sx                                                               (3-8)

where sx and sy are the standard deviations of variables x and y, respectively, and bo is estimated from

bo = ȳ - b1x̄                                                             (3-9)

where ȳ and x̄ are the mean values of y and x, respectively. Commonly we are interested in whether or not the
fitted line (model) passes through the origin because, if not, there is clearly a fixed bias of some kind. The
standard error on the y-axis intercept, so, is given by
so = sy{([1-r]/n)(2 + [x̄/sx]^2[1+r])}^1/2                                (3-10)

where r is the correlation coefficient between x and y. If bo ± 2so contains the value zero then the calculated
value is a likely sampling variation of a true value of zero.
Similarly, we commonly want to know if the fitted line has a slope of 1.0, i.e., the paired values, on average, are
equal. The standard error on the slope is
ss1 = (sy/sx)([1 - r^2]/n)^1/2                                           (3-11)

If the range b1 ± 2ss1 includes 1.0, then the calculated value of the slope is not statistically distinguishable from
the value 1.0 at the level α = 0.05. Note that if we are checking for the coincidence of a fitted line with the y
= x line then we must be able to show both that the intercept is a likely sampling outcome of an average value
of zero and that the calculated slope is a likely sampling outcome of a slope of 1.0.
The dispersion Sd about the reduced major axis is
Sd = {2(1-r)(sx^2 + sy^2)}^1/2                                           (3-12)

Note that this is a combination of errors in both y and x as follows:


Sd^2 = sxp^2 + syp^2                                                     (3-13)

where sxp and syp represent the average precision of x and y respectively as one standard deviation. It follows
that if x and y represent the same conditions (i.e., same lab and same methodology such that sxp and syp are
estimates of the same precision) the average error of the procedure (as a variance), savg2, can be determined as
follows:
savg^2 = Sd^2/2                                                          (3-14)

These errors can be taken as normally distributed (cf. Miller and Kahn, 1962). The dispersion about the RMA
line can be used in several practical comparisons including (1) the comparison of replicates of several standards
by one laboratory with replicates of the same standards by another laboratory; and (2) the comparison of inter-
and intra-laboratory paired analyses for routine data spanning a wide range of values. Similarly, it follows that
where x and y values represent two different labs (a common situation with much paired data in the mining
industry), the average precision determined from such duplicate data is not the precision of either lab but is a
form of average precision of the two labs. Such paired data, by themselves, contain no basis for determining the
average precision of either of the two labs in question.
Because the errors are normally distributed, the estimate of average precision (one standard deviation)
can be used to estimate the average absolute difference between pairs of values, mad, as follows:

mad = 0.8 savg                                                           (3-15)

The average absolute difference is a useful parameter because it quantifies the differences to be expected
between pairs of duplicate data.
Of course, it is relatively easy to calculate the mean absolute difference directly from a set of paired data;
however, equation 16 turns out to be an extremely good estimate in practice and is easily obtained by
calculation.
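For readers who prefer to see the arithmetic laid out, the following Python sketch implements equations 3-7 to 3-15 for a set of paired analyses; it assumes NumPy is available, and the function and variable names are purely illustrative.

import numpy as np

def rma_fit(x, y):
    # Reduced major axis fit to paired data (equations 3-7 to 3-15).
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    r = np.corrcoef(x, y)[0, 1]

    b1 = sy / sx                                   # slope, eq. 3-8
    b0 = y.mean() - b1 * x.mean()                  # intercept, eq. 3-9
    s0 = sy * np.sqrt(((1.0 - r) / n) *
                      (2.0 + (x.mean() / sx) ** 2 * (1.0 + r)))  # eq. 3-10
    s1 = (sy / sx) * np.sqrt((1.0 - r ** 2) / n)   # eq. 3-11
    Sd = np.sqrt(2.0 * (1.0 - r) * (sx ** 2 + sy ** 2))          # eq. 3-12
    s_avg = Sd / np.sqrt(2.0)                      # eq. 3-14 (x and y share one protocol)
    mad = 0.8 * s_avg                              # eq. 3-15
    return {"slope": b1, "intercept": b0, "se_slope": s1,
            "se_intercept": s0, "Sd": Sd, "s_avg": s_avg, "mad": mad}

Checking whether the intercept ± 2 standard errors spans zero, and the slope ± 2 standard errors spans 1.0, reproduces the significance tests described in the next section.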

3.6.1 Significance Test


In recognizing and quantifying error it is important to examine paired data for bias first because the
presence of bias can limit the manner by which precision is quantified. The presence of a fixed bias is indicated
if the slope is equal to 1.0 and the y-intercept is statistically different from zero. With large numbers of data
these two tests can be made (1) by examining the error limits on the calculated slope value. If the 95%
confidence range for the slope value contains the value 1.0, then the slope cannot be distinguished from 1.0,
and (2) examining the error limits on the y-intercept. If the 95% confidence limits do not include zero, then a
fixed bias is present.
The presence of a proportional bias can also be checked easily by demonstrating two requirements,
viz. (1) that the 95% confidence limits on the y-intercept include zero (i.e., the y-intercept is a possible
sampling variant of a true value of zero), and (2) the 95% confidence limits on the slope do not include 1.0 (i.e.,
the slope is significantly different from 1.0).
Often the slope is found to differ significantly from 1.0 and the y-intercept is significantly different
from zero. This is the general case in which components of both fixed and proportional bias are present in the
paired data. In all cases in which bias is demonstrated by the simple linear model, additional information is required from standards and/or other check labs in order to determine which, if either, of the two components of the paired data is correct.
Of course, cases more complex than those described above can emerge. For example, the statistics
might show that the y-intercept is significantly different from zero and the slope is significantly different from
1.0, but the two conditions are in opposition to each other. That would be the case if the y-intercept were
negative and the slope were greater than 1.0 (or, vice versa, the y-intercept were positive and the slope were less than 1.0). In such cases it is evident that a single linear model does not describe the error distribution well. It might be necessary to subset the data and deal separately with individual subgroups. In the author's experience this complexity is common where there is a serious discrepancy in paired data near the detection limit; removal of very low values at and near the detection limit permits the remaining data to be interpreted in a
straight-forward manner. Other causes of this complex pattern are possible, including having subsets of the data
with very different error characteristics. This can generally be spotted easily by examination of a scatter plot of
the data and the data can then be subsetted appropriately for purposes of interpretation.

3.6.2 Example: Silbak Premier Gold Deposit


An example is provided in Figure 3.8, where analyses of duplicate blasthole samples are plotted for a
set of quality-monitoring data for blasthole assays from the Silbak Premier gold deposit, Stewart, British
Columbia. A histogram of differences of these paired blasthole data is shown in Figure 3.9. In this example, the
two traditional least squares models (error attributed entirely to one variable) contrast strongly with the
centrally positioned RMA model. Parameters of the variables and the three illustrated linear models are
summarized in Table 3-1. In this case assume that the RMA model is appropriate for interpreting the
significance of the data. The intercept, bo = −0.173, has a standard error so = ±0.324 and so cannot be distinguished from zero. The slope is close to 1 and, likewise, cannot be distinguished from 1 statistically. The large dispersion of the data is quantified, on average, by the error term for the linear model, Sd = ±4.12 g/t, a remarkably large value
that leads to great concern about the blasthole sampling procedure. Since both x and y values were obtained by
the same sampling and analytical protocol, Sd can be used to demonstrate a total error in the protocol of 2.91 g/t


(one standard deviation) and a mean absolute difference of 2.33 g/t. Blasthole samples are unbiased but have a
very large random error.
Close inspection of the diagram reveals that the dispersion of data is not uniform throughout the range
of the data. Specifically, for lower grade data the dispersion about the line is much less than for higher grades.
Consequently, formal statistical tests, as done above for illustration, are not appropriate; the data must be
subsetted (subjectively) and the linear model for each subset considered individually (see Figure sinben-1.bmp).
Note that such subsetting provides a better description of the paired data because examination of the scatter plot
for the entire data set clearly shows that the higher grade values are substantially more dispersed than are the
lower grade values.

Figure 3.8: Scatter plot of 122 duplicate blasthole samples analyzed for gold (g/t), Silbak Premier gold
mine, Stewart, B. C. Reduced major axis (RMA) and two traditional least squares linear models are
illustrated. Note the limited scatter of low values and the much greater scatter of high values,
suggesting the presence of two data subsets that should be investigated separately.
TABLE 3-1: PARAMETERS OF VARIOUS LINEAR MODELS SHOWN ON FIGURE 3.8
Model    y-variable    Intercept and error¹    Slope and error¹    Dispersion about line²
RMA³     AUD           -0.173 (0.324)          0.9515 (0.057)      4.12
TLS³     AUD            0.677 (0.191)          0.708 (0.035)       2.66
TLS³     AU             1.028 (0.192)          0.782 (0.038)       2.80

¹ Values in brackets are one standard error.
² One standard deviation (corrected from Sinclair and Bentzen, 1998).
³ RMA = reduced major axis; TLS = traditional least squares.

The data of Table 3-1 are sufficient to do a quantitative analysis of the relation between the duplicate samples
plotted in Figure 3.8. Consider TLS first; in both cases the y-intercept cannot be distinguished from zero because the range ±2s about the estimated intercept includes zero. On the other hand, in both TLS cases the slope is statistically different from 1.0 because the range ±2s about the estimated value does not include 1.0. Consequently, both TLS models lead to an interpretation of bias between the two samplings, interestingly, in opposing directions; that is, in one model y is biased high, in the other x is biased high. These conflicting results clearly emphasize the error in using a TLS linear model in dealing with paired quality control data.
The RMA model can also be tested in the same manner as the TLS model; the intercept is seen to be
indistinguishable from zero and the slope is indistinguishable from 1.0. The dispersion can be used to estimate
the total error, st, as follows:
st² = (4.12)²/2

to give st = 2.91 g Au/t

At an average grade of 7 g/t, this error translates into a sampling/analytical precision of 200 x 2.91/7 = 83%
(Note that precision as a percent varies with concentration). This precision is an average for the data of Figure
3.8 and is influenced strongly by the very wide scatter of the higher values (note the different symbols on
Figure 3.8 that subdivide the data into two groups with obvious different scatter). Consequently, the
calculations above are simply to illustrate procedure. It would be more appropriate to subdivide the data as
illustrated in Figure 3.8 and treat each of the two subgroups separately in the same manner as above as
illustrated in Figure 2.6. Note that the mean absolute difference of the sample pairs can be estimated as 0.8 × st = 2.3 g/t. This value simply means that the average (unsigned) difference between paired sample values is 2.3 g/t;
some differences will be larger, some smaller.
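As a check on the arithmetic above, the same tests can be written out directly from the Table 3-1 parameters; this is only a sketch and the variable names are illustrative.

# Significance checks using the RMA parameters of Table 3-1 (illustrative only).
b0, s0 = -0.173, 0.324           # intercept and its standard error
b1, s1 = 0.9515, 0.057           # slope and its standard error
Sd = 4.12                        # dispersion about the RMA line (g/t)

no_fixed_bias = (b0 - 2 * s0) <= 0.0 <= (b0 + 2 * s0)          # True: intercept ~ 0
no_proportional_bias = (b1 - 2 * s1) <= 1.0 <= (b1 + 2 * s1)   # True: slope ~ 1

s_total = Sd / 2 ** 0.5          # 2.91 g/t, total error of the common protocol
mad = 0.8 * s_total              # about 2.3 g/t, expected mean absolute difference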


SECTION 4
4.0 PRACTICAL MEASURES OF SAMPLING AND ANALYTICAL ERRORS
4.1 The nature of errors
Errors fall naturally into two categories, random and systematic (biased). Random error is the more-or-less symmetric dispersion (spread) of individual measurements about a mean value; systematic error or bias
occurs where a set of analyses depart, on average, in a regular manner from the corresponding true or reference
metal contents. In sampling and assaying random error always exists and its average magnitude is relatively
easy to quantify. Systematic error might or might not be identifiable in a particular set of data and although
obvious in some cases, commonly will have to be identified by statistical test. The distinction between random
and systematic error is illustrated in Figure 4.1 where accuracy is shown by values (dots) that cluster about the
center of the target and bias is demonstrated by data clusters whose centers are removed from the target center.
In Figure 4.1 it is evident that a set of analyses can be (A) imprecise and inaccurate, (B) imprecise and
accurate, (C) precise and inaccurate, and (D) precise and accurate. Consequently, in properly understanding
errors in a data set, we must distinguish and quantify both the degree of accuracy (cf. bias) and the degree of
precision (ability to reproduce values). A more traditional view of accuracy and precision is illustrated in Figure
4.2. Both of these images of accuracy and precision are limited in that they provide a global view whereas,
practice has amply demonstrated that the nature of error can differ widely as a function of concentration.
Obvious explanations for this include the use of different methodology for different sample types, and the
presence of different styles of mineralization as an approximate function of concentration (e.g., low grade
disseminations versus high grade veinlets in gold deposits).

Figure 4.1: Graphic illustration of accuracy and precision. (A) imprecise and inaccurate, (B) imprecise
and accurate, (C) precise and inaccurate, (D) precise and accurate.


Figure 4.2: Types of error in measurement data, random and systematic errors, each with narrow (good quality) and wide (poor quality) precision. μ is the true population value that is being estimated by the mean of a distribution of values shown by the symmetric curves (analogous to histograms).

The great majority of assay results are subject to random errors that can be described by a normal distribution. As an example, consider a histogram of 29 analyses of Canmet standard CH-3 (reported value = 1.40 g Au/tonne) by one check laboratory, to which has been fitted a normal curve (Table 4-1 and the accompanying figure). The normal curve and the histogram have the same mean and standard deviation (m = 1.38 g Au/tonne; s = 0.11). In this example, the distribution of replicate analyses is well described by a normal distribution. In fact, in the great majority of cases error is distributed normally. Exceptions to this situation, rare and generally recognizable for assay data of mineral deposits, are summarized in Table 1 (Thompson and Howarth, 1976a).
TABLE 1: SOURCES OF NON-GAUSSIAN (NON-NORMAL) ERROR DISTRIBUTION*

2. The sample is heterogeneous, the analyte being largely or completely concentrated in a small proportion of the particles constituting the sample, e.g., tin as cassiterite in sediments.

3. The precision of the (analytical) method is poor, and the calibration is intrinsically non-linear, e.g., in the region of the detection limit of spectrographic methods, where the calibration is logarithmic.

4. The concentration levels are within an order of magnitude of the digital resolution of the instrument. For example, lead concentrations determined by atomic-absorption spectroscopy are commonly recorded as integer multiples of 0.1 µg ml-1 with no intermediate values. The final values, referring to the original samples (after multiplying the instrumental value by a factor), take only discrete values, such as 0, 5, 10, 15, ... ppm. This custom produces a discontinuous frequency distribution of error.

5. The concentration levels are near the detection limit, and sub-zero readings are set to zero. Alternately, readings below the detection limit are set to the detection limit or recorded as "less than ...". In this connection, it is worth emphasizing that, while the idea of negative (or even zero) concentration has no physical significance, a negative measurement of concentration is feasible and, when considered statistically (i.e., as an estimate with confidence limits), meaningful.

6. The data set contains wild results or "fliers". These values can be distinguished conceptually from ordinary random variations as arising from mistakes or gross errors in procedure. In short, they really belong to a different population of results.

*Summarized from Thompson and Howarth (1976a)
Errors in assay data are more complex than the simple concept of Figures 4.1 and 4.2. A general model
for considering sampling and related errors based on duplicate analyses is summarized in Figure 4.3 (after
Sinclair and Blackwell, 2002). This model is concerned with paired data values where the expectation is that


one set of values will, on average, be reproduced by a second corresponding set, providing there is no bias
between the two sets. For example, we might be comparing two analyses for each of 127 pulps; or, assays of
one half core might be compared with assays of corresponding second halves of core, etc. In each of these cases
we do not expect the paired values to be identical because of inherent errors but we generally hope that in a
large data set the paired values will be identical on average. The term "identical on average" means that
differences for pairs will have an average value of zero, which implies that no bias is evident.

Figure 4.3: A simple linear model to describe errors in paired data as a function of composition.
Scales for x and y are equal interval. (a) Dispersion of paired data about the y=x line results from
random error in both x and y, always present. (b) Proportional bias plus random error produces a
linear trend through the origin with slope different from 1.0. (c) Random error plus a fixed bias
produces a line with slope =1.0 and a nonzero y-intercept. (d) A general model incorporates random
error, fixed bias and proportional bias to produce a linear array of plotted data with slope different
from 1.0 and a nonzero y-intercept. After Sinclair and Blackwell (2002).
If no bias is present in a graph of paired data, the data points will be scattered about the y = x line. This
scatter is the random error inherent in the paired data. In some cases the paired data points do not scatter about
the y = x line, but are displaced from that reference line. Two idealized situations are common,
(1) scatter about a line that passes through the origin but is not parallel to the y = x line (proportional bias as
shown in Figure 4.3b), and
(2) scatter about a line parallel to the y = x line but not passing through the origin (fixed bias as shown in
Figure 4.3c).
Of course, where bias exists, it can be a combination of both proportional bias and fixed bias, and a more
general model is required to explain the distribution of paired data as illustrated in Figures 4.3b and 4.3d. These
models all incorporate straight line relations on a graph of paired analyses. In rare cases, other mathematical
models than straight lines might be appropriate. In general, however, in modelling duplicate assay data, there is
no reason to suspect anything other than a linear model and the scatter of data is such that small departures
from a linear model generally cannot be identified with confidence.
One complication that does occur commonly, is the fact that very different error patterns arise for
different concentration levels. For example, in gold deposits, a background, disseminated mineralization can be
sampled in a representative fashion much better than can higher grade, more erratically distributed gold-bearing
veins. The two styles of mineralization (disseminated vs. erratic veins) can have very different character to their
errors. Similarly, for many metals, low-grade and high-grade values might be assayed by different methods and
thus have very different character to their errors. It is evident that there are good reasons for data to be divided


into subsets, each of which should be analyzed separately for characterizing errors. From a purely practical
point of view, data might be subsetted on the basis of abundance/density of data in various concentration ranges
simply because low values are very much more abundant than are high values so that average errors inherent in
linear models for the entire data set are not representative of either high or low values. In brief, subsetting of
data for error analysis might be necessary because of
(1) different styles of mineralization,
(2) different sampling and/or analytical methods for different concentration ranges, and
(3) very different amounts of data for different concentration ranges.

Figure 4.4: Idealized examples of patterns exhibited on scatter plots of paired quality control data
incorporating sampling and analytical errors. (a) Random error plus outlier. (b) Two different random
errors as a function of composition, perhaps resulting from different analytical methods or different
styles of mineralization. (c) Random error plus proportional bias at low values, only random error at
high values, perhaps resulting from errors in calibration of standards. (d) Difference in random error
as a function of concentration, perhaps arising from disseminated versus nugget styles of
mineralization. (e) Difference in random error as a function of concentration, plus bias in the high
valued data group, possibly resulting from segregation during sampling or subsampling by one of the
operators involved in obtaining the paired data. (f) Proportional bias such as might arise by incorrect
calibration of a standard that was then diluted to form standards of lower concentrations. After
Sinclair and Blackwell (2002)
The simple linear model to describe error in paired data is widely used in the mining industry and
elsewhere and has the advantage of being both easily understood and easily implemented for quantitative
determination of errors. In general, it is not adequate to fit a linear model to data by eye. Instead, an appropriate
statistically sound model fitting procedure must be used, for example, an appropriate variation of what is


known as least squares procedures. For modeling described here the use of traditional least squares fitting
procedures, in which all the error is attributed to one of the variables being compared, is not only inappropriate
but is incorrect and can lead to serious errors as we have seen in the previously described example for blasthole
sampling at Silbak Premier gold mine (Sinclair and Bentzen, 1998).

Figure 4.5: Tube-sampling Ag analyses (g/t) of 42 blasthole piles, Equity Silver mine, versus best
weighted value of the Ag content of the piles. Note the dispersion.
Consider a practical example of the application of the simple linear model from Giroux and Sinclair
(1986), illustrated in Figures 4.5 and 4.6. The example involves 42 piles of blasthole cuttings that were sampled
by two methods (tube and channel) and then sampled in their entirety. Ag assays were obtained for all 3 types
of samplings so that a weighted average grade (best) could be calculated for each cuttings pile. The two
sampling methods can then be compared individually with the best value. Because the abscissa is the same in
both cases, any difference in scatter about the best fit line can be attributed to differences in error between the
two sampling methods. The dispersion (standard deviation) for tube sampling is 26.2 g/t (Figure 4.5) whereas
for channel sampling it is 3.8 g/t. Clearly, there is substantially less scatter in the case of channel sampling and
one can conclude that channel sampling produces better results than does tube sampling. In addition, in both
cases the statistical data indicate there is no bias, only random error because the y-intercepts cannot be
distinguished from zero and the slopes cannot be distinguished from 1.0.


Figure 4.6: Channel sampling Ag analyses (g/t) of 42 blasthole cuttings piles, Equity Silver mine,
versus best weighted value of the Ag content of the piles. Compare dispersion with that of Figure 4.5.

4.2 Relative Error


The relative error (er), that is, the standard deviation (se) of a distribution divided by the mean value
(m), is a simple and widely used statistic for quantifying average error.
er = se/m        (4-1)

Consider a simple example involving replicate analyses of a standard as summarized in Table 4-1. Both labs A
and B are slightly low, on average, for Au: the reported value for CH-3 is 1.40 gpt. The average apparent bias for Lab A is 1.368 − 1.4 = −0.032 gpt or −2.29%. The average apparent bias for Lab B is 1.378 − 1.4 = −0.022 gpt or −1.57%. Both of these apparent biases can be tested formally to determine if they are simply
examples of random error of a zero bias situation, or, are real. In either case, the amount is small and is within a
range that is generally acceptable in routine analyses by a commercial lab. In the case of Lab C, the average Cu
content is 0.829%, virtually identical with the reported value of 0.83%. Recall that precision can be determined
from the relative error, simply by multiplying by 200. Note the roughly comparable precisions for Au for Labs
A and B and the very much better precision obtained by Lab C for Cu (compared with Au). Precisions can
differ markedly for different metals even within the same standard! Traditionally base metals are estimated with
better precision than are precious metals.


TABLE 4-1: STANDARD CH-3 REPEATS FOR Au (LABS A AND B) AND Cu (LAB C)

AAugpt*   BAugpt*   CCupct*
1.4       1.46      0.83
1.39      1.48      0.82
1.42      1.4       0.84
1.38      1.61      0.83
1.41      1.5       0.8
1.23      1.25      0.83
1.4       1.45      0.82
1.39      1.28      0.84
1.34      1.39      0.82
1.39      1.23      0.84
1.5       1.31      0.83
1.47      1.44      0.82
1.3       1.27      0.83
1.47      1.34      0.84
1.35      1.26      0.84
1.41      0         0
1.4       0         0
1.17      0         0
1.33      0         0
1.42      0         0
1.34      0         0
1.34      0         0
1.39      0         0
1.35      0         0
1.06      0         0
1.32      0         0
1.41      0         0
1.48      0         0
1.41      0         0

Average    1.368     1.378     0.829
Stdev      0.091     0.112     0.011
Rel. Err.  0.067     0.081     0.013
Precision  13.4%     16.2%     2.6%
n          29        15        15

*Variable name includes lab designation (A, B, C), metal (Au, Cu) and units (grams per ton or percent). A zero value indicates not analyzed.
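The summary statistics behind Table 4-1 are easy to script; the sketch below (Python, hypothetical replicate values rather than the Table 4-1 columns) follows equation 4-1 and the 200 × (s/m) convention for precision used in the text.

import statistics

def standard_summary(replicates):
    # Mean, standard deviation, relative error (eq. 4-1) and precision (200*s/m)
    # for replicate analyses of a standard.
    m = statistics.mean(replicates)
    s = statistics.stdev(replicates)
    rel_err = s / m
    return m, s, rel_err, 200.0 * rel_err

# Hypothetical gold replicates (g/t); not the Table 4-1 columns.
mean_au, sd_au, re_au, precision_pct = standard_summary(
    [1.40, 1.39, 1.42, 1.38, 1.35, 1.30, 1.41, 1.37])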

4.3 Mean Absolute Difference


The mean absolute difference of paired analyses is a useful way of summarizing duplicate analyses in
which no bias is recognized. To calculate this figure it is useful to examine a binary plot of the first set of
analyses versus the second set in order to establish that no bias is present. Then the differences in pairs of
analyses are determined, all are taken as positive values and an average is calculated. Where a set of differences
are summarized as a mean (indistinguishable from zero indicates bias is not evident) and standard deviation, sd,
and have a more-or-less normal distribution (a relatively common occurrence), the mean absolute difference,
mad, is closely approximated by

mad = 0.8 sd        (4-2)

Note that the differences themselves can also be used to test for global bias: the average difference should not be significantly different from zero, or else bias exists.
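A minimal sketch of the direct calculation, together with the 0.8·sd approximation of equation 4-2, is given below; it assumes the paired data are free of bias, as discussed above, and the names are illustrative.

import statistics

def mean_absolute_difference(x1, x2):
    # Direct mean absolute difference of paired analyses, plus the
    # 0.8*sd approximation of equation 4-2 (assumes no bias is present).
    d = [a - b for a, b in zip(x1, x2)]
    mad_direct = statistics.mean([abs(v) for v in d])
    mad_approx = 0.8 * statistics.stdev(d)
    return mad_direct, mad_approx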

4.4 Thompson and Howarth Error Analysis


The Thompson-Howarth approach to error estimation using duplicate analyses (Thompson and
Howarth, 1976a; 1976b, 1978) is widely used by applied geochemists in the consideration of errors in
geochemical data and is one of several methods in use for treating errors in assay data. Consequently, it is
important to recognize the advantages and limitations of the approach so that a thoughtful evaluation can be
made of the appropriateness of the method in individual cases.

4.4.1 Assumptions
The Thompson-Howarth method of quantifying error assumes (i) a normal distribution to the error and
(ii) that the relative error (i.e., s/m) in sampling and analysis changes as the true average grade increases
(Figure 4.7). The assumption of normality of error distribution is generally, but not always, met (see Table 1.1).
The assumption regarding change in error as a function of concentration, taken as a linear relation of absolute
error versus concentration, is more problematic. In general, experience demonstrates that average absolute error
does increase as a function of increasing concentration. However, it is also common that assay data are obtained
using different sampling procedures, different labs and different analytical approaches even within the same
lab, all of which contribute to complicating the pattern of error versus concentration if such data are considered
together.
The Thompson-Howarth method is restricted to a quantification of random error as a linear function of metal concentration; bias is ignored and, in fact, is obscured by the method; consequently, some other
method must be used to test for bias. If bias is present in a set of paired data, the Thompson-Howarth method is
an inappropriate means of quantifying error. In addition, the Thompson-Howarth method is most appropriate
for data from the same laboratory, hence, it is generally an unacceptable method for comparing data from two
different labs. The reason is that any error quantified by the T-H method is an average of the errors of the two
components of paired data. Where the data represent a single lab, the T-H method produces an estimate of the
average random error for the particular sampling and/or analytical protocol used by that lab; where the data
represent two different labs, the T-H method produces an estimate of the average error of the two labs (possibly
including a component for bias) whereas, what we really want to know is the error of the principal lab
producing the data in question and the presence or absence of bias between the two labs. Of course, where bias
is demonstrated between two labs, additional information is required in order to ascertain which if either lab is
correct.

4.4.2 The Method


The general method for determining precision using the Thompson-Howarth procedure is itemized
below. While reading through this procedure it is useful to make reference to Figure 4.7a and b.
1. For a set of paired (duplicate) data, determine the mean concentration of each pair [(x1 + x2)/2] and the corresponding absolute difference in concentrations (|x1 − x2|).

2. Arrange the paired data in order of increasing concentration, using the means of pairs. A minimum of 50 pairs is recommended for geochemical data.

3. Divide the full data set into successive ordered groups of 11 pairs (for geochemical data). That is, the first eleven pairs in the ordered set are group 1, the second 11 pairs are group 2, and so on. The authors have found that for assay data (generally more accurate than geochemical data) as few as 7 pairs per group is commonly adequate.

4. For each group find the group mean value (i.e., the mean concentration of the 11 mean values) and the median value of pair differences (i.e., the middle value of the absolute differences).

Figure 4.7a: From Thompson and Howarth (1973). Illustration of the T-H method of estimating precision for geochemical duplicate data. Vertical dashed lines divide the data into subgroups of 11. For each subgroup individual data items are shown as open circles, and the average of each group (black circle) is plotted as the average grade of the 11 sample pairs versus the median difference of sample pairs. A line is fitted to the average values representing average error as a function of concentration. See text for details.

5. Plot the coordinates of these two parameters on an x-y graph and pass a straight line through the points. Because there are so few points it is generally unwise to use a least squares fitting procedure (unusual line equations can result); it is better to fit a sensible line by eye.

Examples are shown in Figures 4.7a and 4.7b. The example of Figure 4.7a illustrates a least squares line fitted to the median/mean values for each of the groups of 11 analyses (separated by vertical dashed lines) for a very well-behaved set of data. Generally, it is not wise to use a least squares fitting routine because with a small number of points controlling the line (9 in the case illustrated) a single point that is out-of-line can lead to a peculiar and unrealistic equation. Consequently, it is preferable to fit a line to the median/average values by eye. The problem arises in Figure 4.7b, where only 5 points control the least squares line: it is evident that the four lowest points describe a line substantially different from the least squares line illustrated, and that the highest point has an adverse effect on the model for error. In this latter case it seems that a single linear model does not represent the data well, a situation that can be taken into account easily manually but which cannot be dealt with in a computer routine involving a single linear model.

Figure 4.7b: Sampling and analytical errors of gold analyses as a function of composition based on
55 duplicate samples from the J&L massive sulphide deposit, southeastern B. C. Ordinate is standard
deviation of replicate analyses (or absolute difference of paired values); abscissa is average
concentration of replicate analyses. Filled squares are individual samples that have been replicated;
+ signs are median differences (or standard deviations) versus averages of successive groups of 11
sets of sample replicates. A linear model has been fitted to the + signs to describe error as a function
of composition. From Sinclair and Blackwell (2002).

4.5 The Simple Linear Model (Sinclair and Blackwell, 2002)


Ideally, with paired analyses of the same samples or duplicate samples, the hope is that the paired
results will be nearly identical, that is, spread closely about the y = x line on an x versus y plot. Of course some
random error is always present so at the very least there will be some scatter about the y = x line. This simple
pattern is not always met in practicethe amount of random error can be large or can differ for different
concentration ranges. In addition, bias may be present. The likely effect of these complications is that a variety
of relations can exist between the two variables as illustrated in Figure 4.3. This figure describes a simple
conceptual model for error in duplicate (paired) data. The simplest case is that described above, where data
scatter evenly about the y = x line (Figure 4.3a) because of inherent random error. If a proportional difference
exists between the two sets of data they are said to demonstrate proportional bias (Figure 4.3b). If a more-or-less constant difference exists between the paired data (Figure 4.3c) the data exhibit fixed bias. In a more
general case, paired data exhibit all three components of error (random, proportional and fixed) as illustrated in
Figure 4.3d. Figure 4.3 is a useful, if idealized, conceptual model of error in duplicate data and is a particularly
useful standard against which to compare the patterns encountered with real data. Of course, additional
complications can arise, some of which are illustrated in Figure 4.4.


The conceptual models of Figures 4.3 and 4.4 make use of a straight line describing the data. Commonly, such a line has been generated as a statistical best fit using the method of least squares. Sinclair and Bentzen (1998), among others, recommend that the particular linear equation used to define the model should be the reduced major axis solution, which has been described previously in the course.
The reduced major axis (RMA) linear model combines a standardization of the two variables (i.e.,
effectively divides each value by the standard deviation) and a major axis, least squares solution to determine
the statistical parameters of the linear equation. This procedure avoids any concern about widely different
errors in the two members of the pairs. An RMA regression is desirable when it is important that errors in both
variables be taken into account in establishing the relation between two variables. Many other writers including
Agterberg (1974), Till (1974) Davis (1986) and Miller and Kahn (1961), discuss the use of the reduced major
axis solution in the context of analytical data.
The simple linear model has been used to illustrate much of this course material to this point. Consider
a simple example to illustrate the interpretive procedure. The data are 74 pulps that were analyzed a second
time for platinum group elements by the same sampling and analytical protocol as a means of examining
analytical variability as a function of concentration. Let us follow through a detailed examination of the
resulting paired analyses for Pt using output from the P-res program.

1. Examine the entire data set on an x-y plot. Note the presence of one extremely high outlier (224200, 237300 ppb; agreement within 7%). The presence of the outlier strongly skews the linear model and related statistics, to the point that the statistics are meaningless relative to the great mass of data. It is important to view the data with outliers removed in order to allow a fair interpretation of the bulk of the data.

2. Remove the high outlier, for example, by setting lower and higher limits to the data that are displayed on the x-y plot, and view the remaining data on an expanded scale. Note a group of the 8 highest values (Pthi) that are widely dispersed along the concentration axes, and the remaining lower data that cluster along the line near the origin (Ptlo). Form subsets of the two groups of data by fencing the Ptlo values (with P-res) and moving them to a new symbol.

Figure 4.8: Scatter plot of the eight highest platinum values, with the one extreme outlier removed. See text.

3. Examine the high Pt group alone by turning off the symbol for Ptlo (Figure 4.8). Note that the linear model for the 8 high values indicates that the slope is different from 1.0 and the intercept is different from zero, implying the presence of bias. This would seem to indicate that the two sets of results are not in good agreement. However, note that the intercept is positive and the slope is less than 1.0, so the two effects partly compensate for each other. In fact, examination of the plotted points shows that the lowest 6 are well described by the y = x line; the upper two are below the line and control the model, imposing both the low slope and the positive intercept on the model. The two highest samples do not represent a satisfactory representative sampling of values near 2000 ppb, so we can conclude that there is no evidence of bias. As a further test we could conduct a paired t-test, sketched below following this list. The mean value of original minus duplicate is 23.25 ppb with a standard deviation of 74.46, giving a standard error of 74.46/√8 = 26.3. Zero is contained within the 95 percent confidence limits of the mean, indicating that no bias is evident.

Figure 4.9: Scatter plot of 65 paired low platinum assays for pulps. See text.
4. Examine an expanded plot of the low Pt data by centering on the cluster and zooming in (Figure 4.9). The plot shows the RMA line very close to the y = x line. The statistics of the RMA line indicate that the slope is slightly but significantly different from 1.0 (a 3% bias is indicated) and the intercept cannot be distinguished from zero. Consequently, we conclude that there is a small bias between the two sets of data, of an amount that is commonly encountered both between and within labs.

5. In summary: a single high outlier is reproduced satisfactorily; a group of relatively high Pt values (100 to 2400 ppb) is reproduced acceptably, with some indication that values near 2000 ppb should be closely monitored; and low Pt values (1 to 100 ppb) are duplicated satisfactorily despite the presence of a 3% bias between the two sets of data.
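The paired test mentioned in step 3 can be written in a few lines; the sketch below uses an approximate ±2 standard error criterion rather than an exact t quantile, and the names are illustrative only.

import math
import statistics

def bias_evident(x1, x2):
    # Approximate paired test for bias: is the mean of the pair differences
    # more than about two standard errors from zero?
    d = [a - b for a, b in zip(x1, x2)]
    mean_d = statistics.mean(d)
    se = statistics.stdev(d) / math.sqrt(len(d))
    return abs(mean_d) > 2.0 * se   # False -> no bias evident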

Note that in this interpretive procedure, the simple linear model has been applied independently to each of two
subgroups of data. The subgroups are defined on the basis of data density.
Consider the Ag analyses of Figure 4.5, in particular, the top ten values reproduced in Figure 4.10. These data
are well fitted by a linear model, the parameters of which are shown on the figure.


There are situations where it is inappropriate to apply a linear model to duplicate analyses, in particular, where there are very few pairs of duplicate data. Some approaches to dealing with such situations are summarized in the following table from Stanley and Lawrie (2007), which shows the close relation of a variety of formulae that have been used in the past.

Coefficient of Variation (CV)
  Single duplicate pair: CV = 2|x1 − x2| / (√2 (x1 + x2))
  Average for n duplicate pairs: CV = (1/n) Σ 2|x1i − x2i| / (√2 (x1i + x2i))
  Relationship with CV: CV

Relative Precision (RP)
  Single duplicate pair: RP = 4|x1 − x2| / (√2 (x1 + x2))
  Average for n duplicate pairs: RP = (1/n) Σ 4|x1i − x2i| / (√2 (x1i + x2i))
  Relationship with CV: RP = 2 CV

Relative Variance (RV)
  Single duplicate pair: RV = 2(x1 − x2)² / (x1 + x2)²
  Average for n duplicate pairs: RV = (2/n) Σ (x1i − x2i)² / (x1i + x2i)²
  Relationship with CV: RV = CV²

Absolute Relative Difference (ARD)
  Single duplicate pair: ARD = 2|x1 − x2| / (x1 + x2)
  Average for n duplicate pairs: ARD = (1/n) Σ 2|x1i − x2i| / (x1i + x2i)
  Relationship with CV: ARD = √2 CV

Half Absolute Relative Difference (HARD)
  Single duplicate pair: HARD = |x1 − x2| / (x1 + x2)
  Average for n duplicate pairs: HARD = (1/n) Σ |x1i − x2i| / (x1i + x2i)
  Relationship with CV: HARD = CV/√2

From Stanley and Lawrie (E&MG, v. 6, 2007)
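These measures are straightforward to compute for a set of duplicate pairs; the Python sketch below follows the average formulas as tabulated above, so the exact constants rest on that reconstruction and the names are illustrative.

import math

def duplicate_pair_measures(x1, x2):
    # Average duplicate-pair error measures (after Stanley and Lawrie, 2007),
    # following the average formulas tabulated above.
    n = len(x1)
    cv = sum(math.sqrt(2.0) * abs(a - b) / (a + b) for a, b in zip(x1, x2)) / n
    rv = sum(2.0 * (a - b) ** 2 / (a + b) ** 2 for a, b in zip(x1, x2)) / n
    ard = sum(2.0 * abs(a - b) / (a + b) for a, b in zip(x1, x2)) / n
    return {"CV": cv, "RP": 2.0 * cv, "RV": rv, "ARD": ard, "HARD": ard / 2.0}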

SECTION 5
5.0 SOURCES OF ERRORS: SAMPLING, SUBSAMPLING AND ANALYSIS
5.1 Introduction
The quality of assay data is irrevocably tied to the design and implementation of appropriate sampling
and subsampling procedures or protocols. Samples are, first and foremost, to be taken in a fashion such that
they are fair representations of larger volumes of in situ rock or fragmental ground (Vallee, 1998). Then the
samples must be treated in a manner that maintains the integrity of the metal content throughout sample
handling and eventual analysis. Duplicate sampling data generally do not provide a clear indication of the
representativeness of samples to a much larger volume of rock; that test is best made with a bulk sampling
program. Duplicate cores (adjacent/facing half core samples) do provide insight into very short range variability (the nugget effect) but give no indication of the relation of more widely spaced (5, 10, 50 meters) samples; thus, the duplicate core data do no more than provide a minimum insight into the variability that exists between
samples and much larger volumes of ore. In practice, this larger range variability is best evaluated by bulk
sampling although, if a semivariogram model is known, geostatistical methods are available to examine such
variability theoretically.

5.2 Sampling
The term sample can mean different things to different people, particularly statisticians and
geologists/miners. A sample to a statistician generally means a relatively large number (n) of items for which
some quality has been measured (e.g., 283 assay values for 2m half core samples). On the other hand, to a
geologist a sample is the amount of rock or fragmental material that has been taken for purposes of analysis to
determine an estimate of metal content; the metal content is one item in a statistical sample. In the former meaning a sample is many values; in the latter, a single value. Statisticians use terms to describe various distribution patterns of the items (geological samples) that make up their sample: random, random stratified, etc.
There is a tendency for practical geological sampling to be classed as a modified, regular sampling pattern
because there commonly is an underlying regularity (e.g., drill holes located at 50m centres) modified by some
additional irregularly located drill holes that have been positioned for a variety of reasons (e.g., to establish
rock characteristics, metallurgical character, geological continuity, etc.).
Samples of rock material of the order of 0.5 to 5 kilograms are taken routinely by a variety of procedures to represent a very much larger mass of material, commonly 10,000 to 100,000 times larger than
the samples themselves. This general situation is true of much routine work in the mining industry.
Consequently, samples must be taken in a manner that is as representative as possible, and, of course, each
sample must be treated subsequently in a manner that maintains the integrity of the overall metal content as the
sample is reduced in quantity to an amount that is amenable to chemical or instrumental analysis.


Sampling methodology is not a major topic for this course, but is the essential first step in a series of
undertakings designed to produce an assay value that is as representative of a larger mass of rock, as is
reasonably feasible. Consequently, errors implicit in the sampling procedure must be as small as can be
reasonably expected. Sampling error can only be monitored if a sampling program involves the analysis of duplicate samples, duplicate subsamples and duplicate pulps, preferably all taken at the same time as the original samples. Duplicate samples are taken through the subsampling and analytical procedures independently of the original samples, so they contain errors from both subsampling and analysis as well as the fundamental sampling error itself. Each of these errors can be expressed as a variance and their sum represents the total error in a data set; recall the fundamental relation of errors, as follows:
s²total = s²sampling + s²subsampling + s²analytical
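For example, with hypothetical component standard deviations of 0.30 g/t (sampling), 0.20 g/t (subsampling) and 0.10 g/t (analytical), s²total = 0.09 + 0.04 + 0.01 = 0.14 and stotal = 0.37 g/t; in this illustration the sampling component clearly dominates the total error.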
Sampling techniques and related quality control aspects are discussed by Vallee (1998) and Sinclair and Blackwell (2002); Vallee's classification is summarized in Table 5-1.
TABLE 5-1: CLASSIFICATION OF SAMPLE TYPES

Point: A prospector's grab sample or a geologist's specimen. Specimens are chosen to represent particular features of a mineralized or barren rock; hence, they are only accidentally representative of metal grade. Consequently, point samples cannot be used for grade interpolation although they can indicate continuity of mineralization. Their size typically may vary from 0.2 to 1 kg.

Linear: Linear samples have one very long dimension relative to the other two; they include channel samples, linear chip samples and drill hole samples (core and cuttings). They commonly range from 0.5 to several kg.

Panel: Panel samples (or planar samples) are made up of multiple chips or fragments collected from a surface such as the wall or roof of a drift, raise or stope. They commonly range from 1 to 5 kg.

Broken Rock: Rock fragments (muck) from surface trenches, drifts, slashes, raises, etc. Such samples are commonly collected during exploration, deposit appraisal and mine development. The source mass may vary from hundreds of kilograms to hundreds of tons but generally the samples collected are minuscule in comparison, varying between 2 and 10 kg.

Large Volume: Bulk and trial stopes are examples of large volume samples that commonly range from hundreds to thousands of tonnes. Bulk samples often are obtained from various surface or underground workings.


Figure 5.1: A typical sampling/subsampling protocol for drill core.

5.2.1 Chip Samples


Chip samples are rock/ore fragments that are collected at intervals along a line or some other
appropriate pattern on a rock surface or pile of broken material. Chip samples are susceptible to bias from the
sampler's preference for larger or smaller fragments, localized rather than representative fragments, or other
personal preferences. Chip samples might not be representative of certain ores that feature a mixture of
minerals with great differences in hardness or physical properties such as cleavages unless great care is taken in
conducting the sampling procedure. This arises because any single chip can be biased because of an excess of
the soft (or highly cleavable) material. If the soft material is of value, the grade will be biased high; if the soft
material is gangue, the grade of the chip will be biased low. Simple chip sampling, therefore, may be
inappropriate for some ores. Where a face or exposure consists of discrete zones of hard and soft material of a
size such that each can be sampled separately, the individual grades can be weighted in accordance with the
relative amounts of the two, to produce an unbiased grade. Mechanical errors in chip sampling can be
minimized by making the sample as large as is feasible.

5.2.2 Channel Samples


Channel samples are linear samples collected from a rock surface by extracting all the material in a
predefined, elongate volume using implements such as hammer and chisel, compressed air chisels, rock saws,
etc. Hence, a channel sample is conceptually equivalent to a drill core. True channel samples are rarely taken
today because they are so labor intensive. Instead, much so-called channel sampling is, in reality, the
combination of a series of very closely spaced chips; consequently, such chips can be subject to bias as
discussed above.

5.2.3 Drill Core Samples


Drill core is one of the most fundamental types of sampling available for deposit evaluation because it
represents material from within a large rock mass that is otherwise unavailable for direct examination and


assay. Generally, acceptable quality control procedures demand that the core be split along its axis with one
half being retained for future reference and the second half forming a sample for assay. The use of whole core
for samples should be avoided if possible. (Highly specialized sampling might include the assaying of entire
core as in the case of drilling purely for metallurgical purposes.) A range of core dimensions can be obtained.
Vallee (1998) indicates that BQ core size (36mm diameter) has replaced AQ (26mm diameter) as the industry
standard in Canada despite higher cost, because the larger diameter provides better core recovery and a larger,
more representative sample. Of course, in any particular deposit evaluation, what is considered an acceptable
core size depends very much on the rock character and grade variability in the deposit.
The smaller the core diameter, the greater will be the grade variability of half core samples;
consequently, the greater will be the variability between two facing half core samples using small core
diameter, relative to larger core diameter. The sampling variability between facing half cores can also be
increased somewhat where manual splitting gives widely varying masses to the two, so-called half core
samples (i.e., the two so-called half cores differ substantially in mass). This source of variability can be
overcome by using a diamond saw to cut the core along its axis to provide two half cores of equal size.
Variability of core sample assays can be decreased by increasing the mass of core in the sample. This
can be achieved by (1) using the entire core rather than half core samples (generally undesirable); (2) increasing
the length of core that comprises a sample (perhaps by constructing composites of shorter assay lengths), and
(3) increasing the diameter of the core recovered by drilling. For the most part, half cores should be retained.
The selection of sample length should be considered early in the exploration of a deposit. In general, during
early stages of exploration of a deposit, a shorter sample length can be desirable in order to understand grade
variability as a function of geology. Later, samples can be longer and the earlier, shorter sample values can be
combined (composited) to an equivalent length. Core size (diameter) can be controlled by the necessity of
having a high core recovery to provide faith in resulting assays. Otherwise, core diameter can be selected to
provide samples (normally half cores) that adequately represent the deposit.
Core recovery can be gauged approximately by a visual estimation during core logging. A preferred
technique is to weigh the core for a given length of drilling and compare the weight with perfect recovery. This
procedure requires that an appropriate bulk density be known. Note that such a procedure could occasionally
lead to calculated recoveries in excess of 100 percent (this arises from the use of average bulk density values
for samples that are substantially more dense). Where core recovery is low, assay values become suspect. What
an acceptable lower limit to core recovery will be, is dependent on the mineralogical characteristics of the core.
Soft and/or cleavable minerals can be lost preferentially due to impact and grinding of core pieces during
drilling. Those soft minerals might be either ore or waste. In some such cases both core and sludge can be
assayed and a weighted assay produced. Of course, this combined assay has a different support than do assays
based on half cores. It is useful in some cases to weigh both sludge and core to determine if, together, they
represent an adequate level of recovery; the procedure, however, is relatively costly in man-hours and may require settling containers of a size not easily obtained.

5.2.4 Drill Cuttings Samples


Drill cuttings are fragmented and variably ground material that is the product of percussion drilling, rotary drilling and reverse circulation drilling. Reverse circulation drilling can use water or air returns, both of
which lead to samples that are vulnerable to losses and segregation, hence, to biases. In addition, the drilling
process itself can lead to sloughing of material high in the hole, into samples from lower in the hole and where
ground is relatively soft or partly so, the possibility of downhole contamination/dilution must be tested. Such
testing involves geologic/mineralogic examination of the cuttings as well as examination of grade profiles. All
drilling methods that produce cuttings as samples lack structural information about the rocks penetrated, and
much other geological information is difficult to extract from them. Most important from a quality point of
view, is that loss of material is not easy to identify unless cuttings are weighed so that sample mass can be
compared with a mass determined with a bulk density and hole volume. The various drilling techniques can
produce small to enormous volumes of cuttings (e.g., up to several tonnes in the case of a blasthole through a
10m bench height) that must be sampled for assaying. Sample size should be dictated by the application of


quality control such as the use of Gy's equation for fundamental sampling error. Application of Gy's formula helps to ensure that a sample is representative of a very large mass of cuttings. Proper application of Gy's fundamental formula implies adequate sample homogenization prior to each mass reduction stage.
Of course, cuttings piles are prone to stratification because of the way in which they accumulate, a
stratification that is the reverse of the true stratification in the hole. For example, if the bottom of the hole is
relatively rich in heavy sulphides then the top of the cuttings pile will be similarly enriched. This stratification
must be considered in adopting an appropriate sampling plan for cuttings piles, perhaps by riffling an entire
cuttings pile as part of the sampling protocol, or by taking a slice or pie-shaped segment of the pile. The widely
used tube sampling procedure (see Sinclair and Blackwell, 2002) generally, at best, will have a large random
error.

Figure 5.2: Surface view of a conical pile of blasthole cuttings sampled by 4 scoops (x's) from a tube
and 4 channels. Sampling is adapted to the asymmetry of the pile. The black circle is the blasthole.
Figure 5.3 compares results of a sampling experiment involving duplicate sampling of blasthole
cuttings piles (Figure 5.2). Tube sampling and channel sampling assays are compared with a best value
(weighted average of tube sample, channel sample and remaining material). Clearly, the dispersion about the y
= x line is less for the channel sample relative to the tube sample. In other words, the random error of channel
sampling is less than the random error of tube sampling.


Figure 5.3a: Tube sampling results vs best value, Equity Silver Mine.

Figure 5.3b: Scatter plot of channel sampling results vs. best estimate, blasthole samples,
Equity Silver Mine.
Sampling tests are useful, even necessary, in adopting an appropriate sampling protocol for crushed
material. Figure 5.4 is an example of paired assay data for two different sampling procedures applied to reverse
circulation cuttings. A so-called regular procedure involved directing all cuttings to a cyclone and taking a
1/8th split as the sample, and a so-called total procedure which, in addition to a 1/8th split comparable to that
just described, also collected overflow from the cyclone and weighted that grade into the final assay that was
reported.

Figure 5.4: Assay data for two sampling methods for reverse circulation drilling. See text for details.

A superficial examination of Figure 5.4 might lead one to the conclusion that there is good agreement between results of the two sampling procedures. However, 9
relatively high influential values have a strong control on the linear model. Clearly the 9 high values are
unbiased because they scatter either side of the y = x line. However, there is a strong concentration of data (75
of the 84 values) clustered near the origin and it is worthwhile to examine these data in expanded format. That
lower grade cluster is shown in Figure 5.5 where a strong proportional bias (Total samples assay about 26
percent less than corresponding Regular assays, on average) is demonstrated by the reduced major axis line. In
this case the conclusion that seems evident is that gold is not distributed evenly among the various size fractions
of the drill
cuttings; consequently, the Regular (routine) sampling procedure must be replaced by a more representative
procedure.


Figure 5.5: Expansion of low grade reverse circulation sample data of Figure 5.4.

5.3 Subsampling
A mineralized sample commonly is a mixture of relatively large but variable-sized, solid fragments
that must be reduced in both particle size and mass to a small amount of finely ground material that is analyzed
to determine metal content. For example, a 2-metre length of half (split) BQ core has an ideal volume of
0.00104 m3 which, for a bulk density of 2.9 g/cc, translates to a mass of about 3000 grams, a quantity that must
be reduced by 2 orders of magnitude (to about 30 grams) for traditional fire assay. This overall procedure,
known as a subsampling protocol, involves a series of steps of alternating particle size reduction and mass
reduction that can be demonstrated by a simple example plotted on a sample reduction diagram (Figure 5.6).
Suppose that a sample consisting of 1 meter of half-core weighs 2700 grams and consists of fragments up to 10
centimeters in length (point #1, Figure 5.6). The sample might be crushed so that the maximum particle
diameter is 0.5 centimeters (point #2, Figure 5.6). Then the crushed material is homogenized and a portion is
taken, say one quarter of the original sample, perhaps by riffling (point #3, Figure 5.6). This smaller portion is
then further crushed and/or ground to a much smaller particle size (point #4, Figure 5.6) and again the material
is homogenized and a fraction is taken, perhaps one-quarter of the material (point #5, Figure 5.6). The
remaining 3/4ths of the material (about 2700 × 1/4 × 3/4 ≈ 506 grams) at this stage might be saved as a sample reject
and the one-quarter taken will be further ground to provide a sample pulp (point #6, Figure 5.6), part of which
will be analyzed (point #7, Figure 5.6). Assuming no loss of material during size reduction, the amount of material forming the pulp is 1/4 × 1/4 = 1/16th of the original sample, or 2700/16 ≈ 169 grams, of which 30 grams
(approximately 1 assay ton) normally will be measured out for actual analysis by fire assay. The rejects and
unused pulps commonly are retained for a specified period of time, perhaps several years, in case they are
required for quality control or other purposes. Such rejects and pulps are also available for reanalysis during
audit, due diligence and feasibility procedures or when significant errors are suspected in assay data.
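The mass bookkeeping of such a protocol is trivial to script; the Python sketch below simply tracks the split fractions of the hypothetical example above, and the function name is illustrative.

def reduction_masses(initial_mass_g, split_fractions):
    # Track the mass retained after each mass-reduction stage of a
    # subsampling protocol (split fractions are illustrative).
    masses = [initial_mass_g]
    for fraction in split_fractions:
        masses.append(masses[-1] * fraction)
    return masses

# The hypothetical protocol of Figure 5.6: 2700 g of half core quartered twice,
# giving about 675 g and then about 169 g of pulp, from which ~30 g is assayed.
print(reduction_masses(2700.0, [0.25, 0.25]))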


Figure 5.6: Hypothetical example of a sample reduction scheme (essential elements of a sampling
protocol) plotted on a sample reduction diagram. The step-like pattern shows alternating stages of
particle size reduction and mass reduction (circled numbers) to eventually end with a subsample of
pulp that will be analyzed. See text for details.
There is an extensive literature on the procedures to be used to optimize the sample reduction
procedures so that the errors (both bias and reproducibility) in the analyses are acceptably small and, hence, a
reported assay value is truly representative of the initial field sample (e.g., Gy, 1979; Francois-Bongarcon, 1998; Ingamells, 1974). Gy's approach to designing an optimal sampling protocol is discussed in a separate
section. Regardless of how well designed a sample reduction scheme is, errors, however small or large, exist
and it is essential to monitor the quality of data obtained. Such monitoring provides the basic assurance that a
specific level of data quality is being maintained and allows the recognition of procedural difficulties as they
arise. A number of practical considerations in minimizing and monitoring subsampling errors are given in Table
2.

5.4 Pierre Gy's Fundamental Sampling Error


Gy (1977) developed a fundamental sampling equation that is widely used in the mining industry for
development of appropriate sampling/subsampling protocols.
1/(1/MS − 1/ML) = Cd^x/s²        (5-1)

where
MS [grams] = weight of sample
ML [grams] = weight of lot being sampled
C [g/cm³] = sampling constant
d [cm] = maximum particle diameter (mesh size retaining the upper 5%)
s (fraction) = relative fundamental sampling error
x = exponent (commonly = 3)

Note that where ML is very large relative to MS, 1/ML becomes negligible, the term (1/MS − 1/ML) reduces to 1/MS, and equation 5-1 reduces to

MS = Cd^x/s²        (5-2)

with symbols as above.


The sampling constant, C, is a function of several mineralogic characteristics of the material, as
follows:
C = m·l·f·g        (5-3)

where
m, the mineralogic parameter (g/cc), is given by m = [(1 − xL)/xL]·[(1 − xL)·x + xL·w], in which
xL is the grade of the valuable mineral (as a proportion rather than percent: 7% becomes 0.07),
x is the density of the valuable mineral in g/cc, and
w is the density of the waste material (i.e., everything but the valuable mineral)

TABLE 2: PROCEDURES FOR OPTIMIZING CRUSHING AND GRINDING REQUIREMENTS (LONG, 1998)

Figure 5.7: A subsampling protocol for a bulk sampling program on the Ortiz gold deposit.

l is the liberation parameter, determined according to the following scale:

d/do:   <1     1-4    4-10   10-40   40-100   100-400   >400
l:      1.0    0.8    0.4    0.2     0.1      0.05      0.02

where d is the mesh size that retains the upper 5% of fragments and do is the effective
liberation size of the valuable material.
f is the form parameter (dimensionless) which is 0.5 for most practical situations except for highly platy
materials such as flakes of gold (for which f = 0.2).
g is the size distribution factor (dimensionless) which for unsorted material is taken as 0.25. For material sorted
into screened fractions the value of g for each screen fraction is approximately 0.5.
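As a rough sketch of how equations 5-1 to 5-3 fit together, the following Python fragment computes the sampling constant and the minimum sample mass for an assumed case; the grade, densities, liberation parameter and particle size used are illustrative values chosen by the writer, not data from the text.

def mineralogic_parameter(xL, density_mineral, density_waste):
    # m = [(1 - xL)/xL] * [(1 - xL)*density_mineral + xL*density_waste]  (g/cc)
    return ((1.0 - xL) / xL) * ((1.0 - xL) * density_mineral + xL * density_waste)

def sampling_constant(m, l, f=0.5, g=0.25):
    # C = m*l*f*g (equation 5-3), with the usual defaults for f and g
    return m * l * f * g

def minimum_sample_mass(C, d_cm, s, x=3):
    # MS = C*d^x/s^2 (equation 5-2), valid when ML is much larger than MS
    return C * d_cm**x / s**2

# Hypothetical ore: 0.5% of a mineral of density 4.2 g/cc in waste of 2.7 g/cc,
# liberation parameter l = 0.2, top size 0.5 cm, target fundamental error of 5%.
m = mineralogic_parameter(0.005, 4.2, 2.7)        # about 834 g/cc
C = sampling_constant(m, l=0.2)                   # about 21 g/cc
MS = minimum_sample_mass(C, d_cm=0.5, s=0.05)     # about 1040 g of sample required
print(round(m, 1), round(C, 1), round(MS))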
The estimation of the sampling constant, C, is fundamental to the practical application of Gy's sampling equation. Values can range over many orders of magnitude. For gold deposits alone, Sketchley (1998) has shown values ranging over 5 orders of magnitude as illustrated in Table Sketch-C.bmp. Very early in the history of deposit evaluation, when little detail is known about the mineralogic characteristics of the material being assayed, a very crude approach to quality control is to use Gy's safety diagram, which guarantees a precision of 5% (one standard deviation) for most materials (many gold deposits excepted). An example is shown in Figure 5.8 where a safety line is defined by

MS > MS0 = 125,000d³        (5-4)

which contains the points (0.2 mm, 1 g) and (2 mm, 1000 g). In describing the safety line, Gy (1979) states:
"It is valid for all geological materials such as core samples, with the exception of gold ores, irrespective of the critical content, with the exception of very low grade ores. This rule is obviously the result of a compromise between cost and reproducibility. When low cost is regarded as more important than reproducibility, then the factor 125,000 may be reduced to 60,000. When on the contrary, precision is regarded as more important than cost, which should be nearly always the case, the factor can be doubled to 250,000. The reader should understand that what is important is the order of magnitude of the sample weight."

Figure 5.8 Examples of acceptable and unacceptable subsampling protocols.


It is essential in applying the safety diagram that the first crushing of the sample moves the size well to the left
of the safety line and that the various subsequent reduction steps do not result in the step-like pattern crossing
to the right side of the safety line. Of course, this operational procedure is equally true where a safety line has
been defined specifically for a particular mineralization.
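A simple check of a protocol against the safety line of equation 5-4 might look like the following sketch; the helper function and the particle sizes assigned to each stage are assumptions for illustration only, not values from the text.

def passes_safety_line(mass_g, d_cm, factor=125_000):
    # Equation 5-4: the mass retained at top size d (cm) should exceed 125,000*d^3 grams.
    return mass_g > factor * d_cm**3

# (mass retained in grams, assumed maximum particle diameter in cm) at successive stages
stages = [(2700.0, 0.5), (675.0, 0.5), (675.0, 0.01), (168.75, 0.01)]
for mass, d in stages:
    print(mass, d, passes_safety_line(mass, d))
# With these assumed sizes, splitting at a 0.5 cm top size fails the rule
# (15,625 g would be required), so further crushing is needed before any split.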
A better approach to quality control of the subsampling protocol is to estimate the C value for the
material being assayed. This can be done by a straightforward mineralogic examination of the material in
question, in particular to obtain an approximate mineralogic parameter, m, and to determine an effective
liberation diameter so that the liberation parameter can be determined. For most materials, the other two
components of the sampling constant can be estimated by values given above.

It is important to fully comprehend what Gy's equation does and does not do. First and foremost, the equation assumes that optimal homogenization of the sample occurs prior to each stage of mass reduction. In other words, poor procedures can destroy the validity of the equation. Moreover, there is a common misapprehension that Gy's equation tells us something about the solid, in-place material from which the broken material was extracted. It must be clearly understood that Gy's equation applies to broken and/or fragmented material and is aimed at optimizing the eventual assay attributed to that broken/fragmented material. Applying Gy's safety diagram or, more specifically, his equation, will not compensate for unrepresentative samples. In the case of a half-core sample, a carefully obtained analytical result tells us nothing about how representative the value is of the second half of the core. In other words, when applied to a sample of broken material derived from solid rock, the quality of analysis obtained using Gy's equation bears no relation to the real sampling variability within the corresponding solid rock material.

5.5 Analysis

In practice, analytical error includes a small subsampling error that arises inherently when a small
amount of a pulp is selected for analysis. Analyzing pulps in duplicate, therefore, is a standard means of
documenting analytical error. The regular use of standard reference materials and blanks is a check on the
presence or absence of bias in the analytical procedures as well as providing specific examples of the random
error inherent in the analytical procedures. Regular analyses of pulps by a second, competent lab is a common, useful means of monitoring for analytical bias. It is important to realize that such duplicate data do not provide
a quantitative estimate of precision of either lab, but do provide an average precision of the two labs. The
purpose of this monitoring by an independent lab is to check for bias, not to determine precision. The precision
of a lab can be determined by having that lab, itself, analyze a series of duplicate pulps. Consider a case where
duplicate analyses by two labs (subscripts a and b) have been used to demonstrate an average relative error of 5% for gold values that average 1.5 g/t, that is, s = 0.075 g/t and s² = 0.075² = 0.00563. We know that variances are additive, so (sa² + sb²)/2 = 0.00563, or sa² + sb² = 0.01126. It is an interesting exercise to substitute various values for one of the variances in this latter equation and examine the results as average precisions for both labs; this is done in Table 5-3.
TABLE 5-3: CALCULATED PRECISIONS FOR VARIOUS ERROR DISTRIBUTIONS IN THE RELATION sa² + sb² = 0.01126

sa²      sa       sa/m     Pra      sb²       sb       sb/m     Prb
.010     .100     .0667    13.3%    .00126    .0355    .0237    4.7%
.005     .0707    .0471    9.4%     .00626    .0791    .0528    10.5%
.004     .0633    .0422    8.4%     .00726    .0852    .0568    11.4%
.003     .0548    .0365    7.3%     .00826    .0909    .0606    12.1%
.002     .0447    .0298    6.0%     .00926    .0962    .0642    12.8%

For an average relative error of 0.05, the average precision is 10%. The data of Table 5-3 demonstrate that even where the average precision of the two labs is reasonable, the individual labs can differ in precision by a factor of two or more.
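The apportionment of a fixed combined variance between two labs, as in Table 5-3, is easily reproduced; the short Python sketch below uses the values of the worked example above (mean grade 1.5 g/t and sa² + sb² = 0.01126).

import math

m = 1.5                      # average gold grade, g/t
total_var = 0.01126          # sa^2 + sb^2 from the worked example
for sa2 in (0.010, 0.005, 0.004, 0.003, 0.002):
    sb2 = total_var - sa2
    pr_a = 200.0 * math.sqrt(sa2) / m    # precision (%) = 2 * relative error * 100
    pr_b = 200.0 * math.sqrt(sb2) / m
    print(f"sa2={sa2:.3f}  Pr_a={pr_a:4.1f}%  Pr_b={pr_b:4.1f}%")
# The first row gives Pr_a = 13.3% against Pr_b = 4.7%, as in Table 5-3.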
A variety of analytical methods commonly are available for samples from an evaluation project. It is
not our purpose here to deal with the details of various methods, rather we want to indicate that different
methods might be appropriate depending on factors such as
(i) the composition of the matrix of samples being analyzed,
(ii) the compositional range being investigated, and
(iii) the quality required for the data.
Because of the wide variety of analytical methods/procedures, we are commonly faced with comparing data obtained by more than one method; there is every reason to be on guard for differences between methods, in particular, biases and differences in the amount of average random error.
The explorationist/mine geologist should determine an adequate analytical procedure in discussions
with analysts. Consider the case of gold analyses, as discussed by Hoffman et al (1998), for which many
analytical techniques exist. Of course, for most routine deposit evaluation, the various tried and true variants of
the fire assay techniques are relied on widely. However, it is apparent in Table (hoffman.bmp) that other
techniques might be desirable for special purposes or for particular ranges of gold content.

Figure 5.9: Scatter plot of original Mo assays (x) versus a replicate assay (y) for an operating
molybdenum mine. Original analyses were low by an average of about 5% and were found to be the
source of the discrepancy.
The same approach applies to analyses of a wide range of metals. Springett documents three
techniques of tungsten analysis with a substantial bias among them. Schwarz et al (1983) found biased results
for one of two analytical methods for molybdenum as illustrated in Figure 5.9. Figure 5.10 is a plot of 31 pulps
analyzed for Pd by both NiS and Pb fusion procedures. In this case the NiS fusion assays are about 25% lower,
on average, than the corresponding Pb fusion procedures. Because of the scatter of data, the linear model does
not prove a significant difference between the two procedures. However, a simple t-test at the 0.05 level
indicates an average bias.

t = (1459 − 1905)/[(811.8²/31) + (759.3²/31)]^1/2 = −2.23

which is outside the critical range of −1.96 to +1.96. Hence, we conclude that a bias exists between the two methods. It is well established that rock matrix is an important control on the choice of NiS or Pb fusion in the analysis of platinum group minerals, and communication between the explorationist/miner and the analytical lab is essential so that significant biases do not creep into the analyses. Note that the paired t-test arrives at the same conclusion as the classical t-test: the mean difference of 446 ppb has 95% confidence limits of ±2 × 861/31^1/2 = ±309 ppb (i.e., 137 to 755 ppb), a range that does not include zero and therefore indicates the presence of a bias.
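For reference, the t calculation quoted above can be reproduced directly from the summary statistics given in the text (31 pairs; NiS mean 1459 ppb, Pb mean 1905 ppb, standard deviations 811.8 and 759.3 ppb in the order quoted); the fragment below is simply that arithmetic.

import math

n = 31
mean_nis, mean_pb = 1459.0, 1905.0
sd1, sd2 = 811.8, 759.3
t = (mean_nis - mean_pb) / math.sqrt(sd1**2 / n + sd2**2 / n)
print(round(t, 2))   # -2.23, outside the critical range of -1.96 to +1.96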

Figure 5.10: Scatter plot of thirty one pulps analyzed by both NiS and Pb fusions for Pd.

5.5.1 Metallic Assays


Where visible (nugget) gold is present, it is often wise to analyze samples by what is known as a
metallic assay. In this procedure, the crushed/ground sample is treated to recover a heavy mineral concentrate
(containing coarse gold) that is analyzed independently in its entirety. The light fraction remaining is assayed in
the normal manner (i.e., preparation of a small proportion of pulp representative of the total light fraction) and
the two grades are combined to produce a weighted grade.
Suppose that the 2700-gram sample of Figure 5.6 is crushed to stage 4, at which point the subsample has a total weight of 675 grams. A heavy concentrate from the 675-gram mass weighs 7.4 grams and is assayed in its entirety to give an assay of 147 g Au/t. This heavy concentrate will contain all the large gold grains that are difficult to subsample in a representative manner. In addition, the remaining light fraction (675 − 7.4 = 667.6 grams) is assayed in the normal manner (e.g., use of 30 grams) to give an assay of 1.04 g/t Au. The appropriate weighted assay for the 675 grams of material is (7.4/675) × 147 + (667.6/675) × 1.04 = 2.64 g Au/t.
Metallic assays are used where it is difficult to subsample coarse-grained mineralization in representative
fashion, as is commonly the case with visible gold. Assay costs are increased appreciably with metallic assays
because two separate assays are involved and because both components assayed must be weighed in order to
provide the necessary factors to produce a weighted average.
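The weighted-grade arithmetic of a metallic assay is simple enough to capture in a few lines; the sketch below (the function name is the writer's) reproduces the 675-gram example above.

def metallic_assay(heavy_mass_g, heavy_grade, light_mass_g, light_grade):
    # Mass-weighted combination of the heavy-concentrate and light-fraction assays.
    total = heavy_mass_g + light_mass_g
    return (heavy_mass_g / total) * heavy_grade + (light_mass_g / total) * light_grade

print(round(metallic_assay(7.4, 147.0, 667.6, 1.04), 2))   # 2.64 g Au/t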

SECTION 6
6.0 MONITORING AND QUANTIFYING DATA QUALITY
6.1 Introduction
The use of duplicate data as a means of monitoring and quantifying the quality of assay data is well
ingrained in mineral exploration literature (e.g., Sketchley, 1998). The theory and background of duplicate
samples in providing quality control of assay data have been well known and in intermittent use for many years
(e.g., Burn, 1981; Thompson and Howarth, 1976; Kratochvil and Taylor, 1981). Nevertheless, the use of duplicate samples has increased substantially in recent years, stemming largely from the Bre-X fiasco of the mid 1990s and the various legislation and guidelines that emerged as a response to that incident.
In addition to the use of duplicate sample analyses, replicate analyses of appropriate standard materials are an essential part of a quality control program for assays. There is no advantage to being able to
demonstrate a precision of 5 percent on pulps within a laboratory if the lab has a 10 percent high bias that has
not been documented! Consequently, in addition to the analysis of duplicate samples as a monitor on quality, it
is important to periodically analyze standard samples that have known metal abundances, including those
highly specialized standards with zero metal content, called blanks.

6.2 Dealing With Standards


6.2.1 Introduction
Standards used in assaying are materials that have been made extremely homogeneous and for which specific metal contents are known with a high degree of accuracy. They are generally prepared in substantial quantity so that material exists for use over a long period of time. This necessity of having a relatively large amount of extremely homogeneous material places severe restrictions on their preparation. In-house standards may be prepared without certification and their analytical results are thus suspect. Ideally, standards should be certified
as to metal content, requiring substantial rigor in homogenization and verification of metal content, as is
normally the case with organizations certified to produce such standards. Analytical evidence of the
homogeneity of a standard should be provided by those who prepared the standard.
Standards are an essential component of all quality control programs for assay data. A useful
introduction to the preparation of standard materials and their treatment to produce reported values is given
by Lister and Gallagher, 1970. They outline the crushing, grinding and homogenizing necessary to produce
materials to serve as standards for many common metals. Also they discuss the question of assigning a value to
a standard: assays of the material are obtained from many labs, perhaps as many as 30; a mean (m1) and
standard deviation (s1) for the metal in question are determined and all values more than 2 standard deviations
from the mean are removed so that a new mean (m2) and standard deviation (s2) are obtained. These latter
values (m2 and s2) provide the estimate of metal content and confidence limits can be placed on the mean using
the standard error of the mean.
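The trimming routine described by Lister and Gallagher (1970) can be sketched as follows; the function name and the round-robin values are hypothetical, and a real certification program involves considerably more rigor than this outline suggests.

import statistics as stats

def assign_standard_value(assays):
    # First-pass mean and standard deviation, then discard values more than
    # two standard deviations from the mean and recompute (m2, s2, se).
    m1, s1 = stats.mean(assays), stats.stdev(assays)
    kept = [a for a in assays if abs(a - m1) <= 2.0 * s1]
    m2, s2 = stats.mean(kept), stats.stdev(kept)
    se = s2 / len(kept) ** 0.5          # standard error of the trimmed mean
    return m2, s2, se

# Hypothetical round-robin gold results (g/t) from a number of labs:
labs = [1.52, 1.49, 1.55, 1.47, 1.51, 1.90, 1.50, 1.48, 1.53, 1.46]
print(assign_standard_value(labs))      # the 1.90 outlier is excluded from m2 and s2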
Two categories of standards are used routinely for assay quality control purposes of exploration/evaluation projects: (1) certified reference materials (CRM, also known as standard reference materials, SRM) with international distribution, together with local commercial standards of limited distribution, and (2) property standards. CRMs are constructed in a rigorous manner comparable to that described by Lister and
Gallagher (1970) using a range of international, reputable labs. Local commercial standards, prepared by
certified professionals, can also be CRMs but generally are developed using fewer labs than in the preparation
of internationally-verified CRMs. Property standards are commonly prepared using rock material representative

of that found on the property under evaluation; hence, there should be no serious matrix differences between
the property standard and the great majority of samples analyzed.
Standards are used principally as a check on lab accuracy. Standards are analyzed sequentially along
with routine samples and the analytical results for each standard should be plotted sequentially so that any systematic variations over time can be identified readily and dealt with where necessary. Significant variations
from the recommended values of certified reference materials indicate that bias is present in the laboratory
procedure and rectification is in order. Some of the idealized patterns that arise and their explanations are given
in Figure 6.1.
The statistics for repeat analyses of two local commercial standards, UMT-1 and WPR-1, are given in Table 6-2 and provide an indication of how analytical data should be summarized as well as the level of
variability that can arise.
TABLE 6-2: SUMMARY STATISTICS FOR TWO COMMERCIAL STANDARDS ANALYZED AS PART OF A DEPOSIT RE-EVALUATION PROJECT

Standard   n     Pt (ppb)                           Pd (ppb)
                 Avg.    sd     CV     se           Avg.    sd     CV     se
UMT-1      20    125.2   8.2    6.6%   1.8          108.7   6.8    6.3%   1.5
WPR-1      21    276.4   10.8   3.9%   2.4          231.9   8.3    3.6%   1.8

n = number of analyses; avg = average; sd = standard deviation; se = standard error; CV = coefficient of variation

Figure 6.1: (a) Time plots of analytical data for two Mo standards, E and D. The expected (known) values with which these replicates should be compared are 0.2347 ± 0.0184 for E and 0.1160 ± 0.0038 for D. Note bias in analyses of both standards. (b) Idealized time plot to illustrate interpretation of patterns. In all cases, assume an expected value of 1.5 ppm Au. Results 0-10 are accurate and precise except for one outlier; results 11-20 are accurate but not precise; results 21-30 show a systematic trend that warrants investigation; results 31-40 are precise but not accurate; results 41-50 are both precise and accurate.
Note that repeat analyses of standards of all types provide data that can be used to estimate error quantitatively.
In the case of the two metals summarized in Table 6-2 the error (dispersion) is reported as standard deviation
(sd) and coefficient of variation (CV). Recall that multiplying the CV by 2 gives the average precision, i.e., for Pd in UMT-1 the precision is Pr = 2 × 6.3 = 12.6%. Where standards are recognizable as such to the analyst, precisions determined from their analyses are apt to be optimistic relative to the precision of unknown samples.
In-house standards commonly are not analyzed widely enough that the true metal content is known with the
same confidence as for certified reference materials. If the in-house analyses are obtained with sufficient quality
control involving the use of known standards, however, they can still provide a reasonable control for
monitoring bias as well as precision.
The analytical results for the standards of Table 6-2 are acceptable because of the following features:
1. examination of histograms of the data (not shown) indicates that they are distributed symmetrically about the mean values;
2. the standard deviations (sd) are small relative to the corresponding mean values, as is also shown by the small coefficients of variation (CV) given by the formula 100·sd/avg;
3. the 95 percent confidence limits for the mean (avg ± 2·se) define a range that is relatively small as a percentage of the mean.

Practical use of the repeat analyses of a standard requires sufficient replicates to show whether or not various types of trend exist; such trends may be obvious, as illustrated in Figure 6.1, or might be less obvious visually and require statistical testing. Consider a typical example of 58 sequential analyses of a Pt standard (Figure 6.2). A variety of statistics for these data are given in Table 6-3. Consider the data with the outlier removed and divided into two
TABLE 6-3: SUMMARY STATISTICS FOR REPLICATE ANALYSES OF A Pt STANDARD

N      Mean     sd       Range           Remarks
58     35.78    13.89    19-116 ppb      Total available data
57     34.37    8.914    19-60           Data less one outlier (4th analysis)
22*    36.50    10.06    26-60           First 23 results less outlier
35*    33.03    7.976    19-58           Final 35 results

*Division into subgroups is arbitrary and is based on a visual examination of a graph of the data (grade versus sequence no.).
subgroups, 22 early analyses and 35 later analyses. These two groups can be tested for bias. First determine an F-value: F = 10.06²/7.976² = 1.59. This is less than Fcrit = 1.99 for α = 0.05 and df = (34, 21) derived from tabulations in statistical texts, so the two variabilities cannot be distinguished. A t-value can be determined as follows:
t = (33.03 − 36.50)/(10.06²/22 + 7.976²/35)^1/2 = −1.37. This value compares with tcrit = ±1.96 for α = 0.05 and df = 55, so the two means are indistinguishable; in other words, no bias can be identified.
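The F and t comparisons above follow directly from the Table 6-3 summary statistics; the fragment below simply repeats that arithmetic, with each variance paired explicitly with its own group size.

import math

n1, mean1, sd1 = 22, 36.50, 10.06   # first 22 results (outlier removed)
n2, mean2, sd2 = 35, 33.03, 7.976   # final 35 results
F = sd1**2 / sd2**2
t = (mean2 - mean1) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)
print(round(F, 2), round(t, 2))     # 1.59 and -1.37; neither exceeds its critical value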
A common practice in dealing with analytical data for a standard is to view the data as a histogram, as shown in Figure 6.3 for the data summarized in Table 6-3. In this example, the outlier stands out clearly from the well-defined bell shape of the remaining data and serves as pictorial justification for omitting the outlier value in estimating a representative mean value of the replicate analyses. The histogram, however, is not useful for recognizing subpopulations of the data, which are more likely to be evident on a probability plot.

Figure 6.2: Fifty-eight sequential analyses of a Pt standard. Note the outlier. Also, note the possibility
that early analyses are higher, on average, than later analyses. See text for discussion.

Figure 6.3: Histogram of the Pt analyses plotted in Figure 6.2.

6.2.2 Blanks

Blanks are samples or pulps that are known to contain negligible (effectively zero) contents of an
element or elements (metals) for which assays are being determined. They are used for two main purposes, (1)
to monitor contamination during subsampling and (2) to monitor contamination in the analytical environment.
Occasionally blanks are inserted at various places in a sample sequence to check that samples have been kept in
order throughout the various subsampling and analytical processes. Long (1998) suggests that in low grade ores
blanks are not particularly effective for this latter purpose because many of the low-grade samples can be near
the analytical detection limit.

6.2.3 Monitoring Subsampling Contamination


Blanks (fragmental rock material) can be inserted at the stage of sample-taking or at the start of the sample
reduction procedures. Ideally, such blanks provide a monitor of sample contamination occurring during the
various subsampling steps (esp. in crushing, grinding and pulverizing equipment). One useful procedure is to
submit samples in a specific order to the subsampling system and in the case of obvious high grade samples
(e.g., visible gold identified in a sample), insert a blank sample to follow some or all high grade samples, as
results indicate. Blanks should not be identified as such to the laboratory conducting the assaying, although it is possible that, as time goes on, the laboratory will come to recognize the material as a blank.
Material used for blanks requires some attention to ensure that there is no nugget effect for the element(s)
involved. Long (1998) suggests that barren, unaltered volcanic rock is one of the best materials for blanks because it is relatively homogeneous. Of course, it is important to ensure by sampling and analyses that the
material really is devoid of the metals of concern. Other commonly used materials for blanks for many metallic
elements are nepheline syenite and granite, perhaps obtained from industrial quarries. In general, blanks should
approximate the matrix of the rocks being assayed if this is possible although this is not as essential a
requirement for blanks as it is for standards.

6.2.4 Monitoring the Analytical Environment


Blanks (i.e., pulps with zero content of specific metals) normally are included with each analytical
batch in commercial laboratories. The purpose of these blanks is to identify contamination during the analytical
process. Contaminated blanks of this origin can arise because of impure reactants used in the process, dirty
vessels (glassware, crucibles, etc.), and because of spitting (violent bubbling) that might be caused by release
of gas in certain samples (e.g., sulfide-rich samples releasing SO2, breakdown of carbonates, etc.), which, in
fire assaying, can cause material in one crucible to splash into adjoining crucibles. This source of contamination
can be kept to a minimum during sample description when samples rich in suspect volatile components that
lead to spitting can be identified and treated appropriately. For example, pulps might be heated to deplete the
sample of its volatile component prior to analysis. Of course, weights before and after devolatilization must be recorded so that the eventual assay on devolatilized material can be corrected to a volatile-bearing equivalent. Blanks of this type can be included with assay results reported by commercial labs and, if this is not the case, should be requested by clients. Assays of such blanks are an effective monitor on the internal quality of a laboratory although, of course, they are known to the laboratory.
Lab blanks should report analytical values below the detection limit. Where contamination has occurred (e.g., spattering in an assay furnace) higher assay values are obtained for blanks and raise concern about the quality of data in that particular batch. However, a client should routinely submit
unknown blanks of coarse rock material to the lab as part of an exhaustive quality control protocol. The
reason for such blanks is to provide a check on possible contamination during the subsampling routine to
which all samples are subjected. Such blanks should be tested in advance to ensure that they contain
negligible (unmeasurable) amounts of the elements in question and should report assay values at or near
the detection limit. High values reported for such blanks could arise by contamination during either
subsampling or analysis.

6.3 Laboratory QA/QC Procedures


While it is not the purpose of this manual to describe laboratory quality control procedure in detail it is
important that the explorationist has a clear understanding of the general types of quality control data produced
internally by labs. The need for and practice of quality control have been well established in many commercial
laboratories for some considerable time. Most commercial labs analyze blanks and standards as a routine part of
their operating procedures, and with all individual batches of samples submitted for assay. Today most labs supply this in-house QC information to their customers along with the customers' assays. If these data are not supplied with the client's assays, the results of in-house QC analyses that were generated with the client's assay data should be requested.
Analytical data for blanks are easily examined in tabular form and evaluated relative to the detection limit of
the metals involved. There are two common ways of monitoring replicate analyses produced at regular intervals
by commercial labs. The first and essential method is to plot the analyses versus date of analysis to examine any
systematic variations that might be present as illustrated in Figure 6.1. This figure shows a variety of patterns
that might occur on a value-time plot, including systematic trend, variable scatter, and constant bias. A specific
example of a value-time plot for four molybdenum standards is shown in the upper part of the figure. These
same molybdenum data are also shown as a value-value plot (i.e., differences from the known standard value
versus the corresponding known assay values) in Figure 6.4. This latter diagram shows the analyses for the 3 highest standards to be, on average, about 5% lower than their true values. In this particular case, the analytical procedure was in error and a new procedure had to be implemented.
The internal QC data generated by a lab in the course of conducting assays for a client should be supplied to the
explorationist for his/her examination. Such data are now supplied routinely by many commercial laboratories
but if they are not, the data should be requested.

Figure 6.4: Concentration-difference plot for 4 molybdenum standards. The ordinate is a difference
between analyses of the standards and their accepted values; abscissa is concentration. Two of
these correspond to the time plot of Figure 6.1a.

6.4 Duplicate QC Data During Exploration/Evaluation

6.4.1 Sampling Plan for Quality Control


Quality control duplicate sampling should be integrated into the normal sampling program. Too often
this is not the case and QC sampling, unfortunately, is appended as an afterthought, in some cases long after the
original sampling program is completed. The main reason why QC sampling should be done at the same time as
the main sampling program is so that the QC samples are taken and treated under the same conditions as the
main samples that will provide the basis for deposit evaluation. After all, it is the quality of that main data base
that is in question. QC samples taken at a later time might be affected by natural causes (e.g., alteration of
exposed core samples), might be analyzed by a different lab, by a different analytical method or with different
subsampling protocols or might be contaminated in some fashion. All of these differences can lead to
substantially different errors than existed in the original data. In an exploration project duplicate data should be
collected in a systematic fashion, starting with the very first samples that are sent for assay.
Quality control protocol should involve the following: (1) data editing, (2) analyses by Principal lab, (3)
analyses by Check lab and (4) display and interpretation of data. Each will be discussed below.

6.4.2 Data Editing


Data editing (verification) should start at the outset of an exploration program. Analytical results for blanks,
standards and duplicate samples of all types should be closely monitored and verified along with the bulk of the
data being collected. Routine data editing procedures involve checking outliers and abnormal values in
duplicates. This procedure can be aided significantly by the use of scatter diagrams and histograms.

6.4.3 Principal lab


The principal lab produces the great bulk of data to be used in evaluation of a mineral deposit. Consequently,
the quality of the lab should be known and monitored by the use of duplicate samples. In general, a higher
proportion of samples must be taken for small projects as opposed to major projects, in order to provide enough duplicate data for meaningful statistical analysis, perhaps about 1 in 20-30 samples in small projects to as few as
1 in 100 samples for major projects. The aim is to have a sufficient number of samples for meaningful
quantitative interpretation as well as providing an adequate monitor on routine analyses. Duplicates should
include sufficient sample duplicates (e.g., second half of drill core), coarse reject duplicates and pulp
duplicates such that the magnitudes of sampling, subsampling and analytical errors can be determined. The
minimum number of sample duplicates required depends somewhat on analytical results but commonly a
minimum of 40 to 50 duplicate samples will suffice to quantify the sources of error. For each of these duplicate
half cores, a reject sample from the original (or second) half core should be analyzed in duplicate. Note that at
this point, assuming 50 samples are being considered, the necessary QC duplicate analyses by the principal lab
total 150 (i.e., 50 duplicate half cores and 50 rejects with each pulp being analyzed in duplicate). These 150
analyses are in addition to the 50 original analyses and, of course, appropriate hidden blanks and standards with
each analytical batch. Note that the industry practice of analyzing quarter cores, justified in that a permanent record of the sample interval remains, will overestimate the sampling variability. Sampling, subsampling and analytical protocols can now be evaluated in the light of a quantitative knowledge of the magnitudes of these errors and appropriate actions can be taken to rectify problems as discussed in Section 5.

6.4.4 Check Lab


The main purposes for using a second laboratory for some duplicate analyses are:
1. To allow for more limited use of relatively expensive certified reference materials.
2. To monitor the quality of original data with compositions between those of certified reference materials or internal standards.
3. To monitor potential analytical problems in the principal lab.

Duplicate pulp analyses involving data from two labs are commonly, but incorrectly, used to estimate precision.
It should be apparent that any so-called precision obtained by comparing analyses from one lab with analyses
from a second lab will be intermediate between the true precisions of the two labs. If the precision of neither of
the two labs is known for data consistent with the duplicate data, no quantitative estimate of the precision of
either lab is possible. If the precision of one of the labs is known for data consistent with the duplicate data, it is
possible to estimate the precision of the second lab; this arises because the dispersion (as a variance) about the RMA line is the sum of the analytical error dispersions (as variances) for the two labs, i.e., sa1² + sa2² = sd² (recall the example in Table 5-3, Section 5.5).
About 3 in every 40 to 100 duplicate pulps (depending on the scale of the project) should be sent to an
independent lab as a routine monitor on the principal lab. In addition, pulps of property standards in use by the
principal lab should be added to the routine sample duplicates sent to the check lab.

6.5 Interpretation of Results


Too often quality control results are collected and displayed in reasonable fashion but without a
rigorous interpretation. In some cases, interpretations are incorrect, as in the case of reporting precision based
on a comparison of data from two different labs. In other cases, key information is missing that would allow a
comprehensive quantitative interpretation. All too often, the Thompson-Howarth method for estimating
precision is applied inappropriately, specifically, when bias is present in the data. The preceding
recommendation on quality control data collection (blanks, standards and duplicate analyses) by the principal
lab and a check lab are designed to provide all the necessary quality control information for a comprehensive
quantitative interpretation while minimizing the additional sampling and analytical costs related to quality
control.

There is no such thing as too much data in most practical quality control situations. Consider 30 drill holes
that intersect a narrow vein such that there is only 1 sample per drill hole. This provides 30 samples, perhaps 10
of which should be done in duplicate as described later. Now suppose that 30 drill holes, each of 100m length
in a massive, mineralized epithermal gold zone are sampled as 2-meter lengths of half core. Each hole
represents 50 samples for a total of 1500 samples. Of these 1500 samples about 150 should be analyzed in
duplicate and should include about 50 duplicate half cores, 50 duplicates generated from reject material and 50
duplicate pulps. In addition, 50 pulps should be duplicated by an independent, reputable lab. These duplicate
samples are in addition to blanks and standards that are included with analytical batches. Labs traditionally
insert their own blanks and standards but the client should also submit blind (i.e., unknown to the lab) blanks
and standards. Moreover, the client should receive the lab's results for the lab blanks and standards, as well as
detailed information on the reported values for these materials.
Following QC data verification, an appropriate approach to interpretation includes:
1. A general examination of the data for irregularities, outliers, subpopulations that might require separate interpretation, etc., using histograms, probability plots and scatter (x-y) diagrams.
2. Consider outliers separately.
3. Examine scatter diagrams of various sets of duplicate data to identify influential data (especially a small number of scattered high values that might control a linear model but might bear no particular relation to the error characteristics of lower values). In general, divide the data into appropriate subgroups to be evaluated separately; base this division on variations in spread about the y = x line and on data density as a function of concentration. Divide into ranges using the average of pairs; an exception is the case where there is a change in method at a particular value.
4. Examine sequential analyses of standards on a value-time plot. Such a plot should be updated and reviewed as each new batch of data is received. The mean and spread of assay values reported for the standards should be compared with the reported (or best) value for the standard.
5. For each appropriate subgroup, examine duplicate data from the check and principal labs for bias by comparing a linear model with the y = x line. Quantify the bias, if present, for each subgroup. An average precision for the two labs can be determined but is not always necessary and can be misleading in that the uninitiated might incorrectly assume that the estimated precision is a precision for the data generated by the principal lab.
6. For the duplicate samples by the principal lab (e.g., duplicate half cores), evaluate the results together with the duplicate pulp analyses for a reject sample of the corresponding original half cores. Determine the magnitudes of sampling, subsampling and analytical errors.

6.6 AN EXAMPLE OF A COHERENT DATA SET


6.6.1 Introduction
As a general example of data evaluation consider data that form part of the evaluation of a Pt deposit at an advanced stage of exploration and consist of various duplicate analyses for 19 samples that span the range of a much larger data base. These samples formed part of an audit of a platinum-group-metal (PGM) deposit. Here we will consider only the Pt values. The data consist of an original analysis and 4 analyses obtained as part of the due diligence study. The variables are as follows:
RLPTOPPM: original Pt analysis of half core by the original operator
AARLPT1: check analysis of pulp from the first half of core
AARLPT2: second check analysis of pulp from the first half of core
ACOREPT3: analysis of pulp from the second half of core
ACOREPT4: second analysis of pulp from the second half of core
In obtaining these analytical data the half cores were completely ground and reduced to pulp so that ideally
there is no subsampling error to consider.

6.6.2 Estimating Analytical Error of the Due Diligence Lab


The analytical precision of the due diligence lab can be determined for two different sets of paired data,
AARLPT1 vs AARLPT2 and ACOREPT3 vs ACOREPT4, as shown in Figures 6.5 and 6.6.

Figure 6.5: Scatter plot of duplicate Pt analyses of pulps from first half of core, by due diligence lab.
The first estimate of analytical precision obtained from the data of Figure 6.5 is
2sa² = 265.6² = 70,543.4, so that sa² = 35,271.7 and sa = ±187.8 ppb.
A second estimate of analytical error can be obtained in the same way from the data in Figure 6.6, that is, 2sa² = 222.6², so that sa² = 24,775.4 and sa = ±157.4 ppb. These two estimates can be averaged (by averaging the variances) to give a best estimate of the average analytical error of the due diligence lab based on these data alone; that is, average sa² = (35,271.7 + 24,775.4)/2 = 30,023.5, to give an average sa of ±173.3 ppb. This average analytical error can be transformed into an average precision based on an average grade of 1172 ppb. The average relative error is sa/m = 173.3/1172 = 0.148, to give an average precision of 200 × 0.148 = 29.6%.
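The error bookkeeping of this subsection can be condensed into a short script; the dispersion values are those quoted from Figures 6.5 and 6.6, and the relation sd² = 2sa² for duplicate analyses of the same pulp is the one used in the text.

import math

def analytical_sd(pair_dispersion):
    # For duplicate analyses of the same pulp, dispersion^2 = 2 * sa^2.
    return math.sqrt(pair_dispersion**2 / 2.0)

sa1 = analytical_sd(265.6)                  # Figure 6.5 -> 187.8 ppb
sa2 = analytical_sd(222.6)                  # Figure 6.6 -> 157.4 ppb
sa = math.sqrt((sa1**2 + sa2**2) / 2.0)     # average as variances -> 173.3 ppb
precision = 200.0 * sa / 1172.0             # mean grade of 1172 ppb -> about 29.6 %
print(round(sa1, 1), round(sa2, 1), round(sa, 1), round(precision, 1))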

Figure 6.6: Scatter plot of duplicate Pt analyses of pulp from second half of core by due diligence lab.

6.6.3 Estimating Analytical Error of the Original Lab


Two estimates of the analytical error of the original lab can be made by comparing RLPTOPPM vs AARLPT1 and RLPTOPPM vs AARLPT2, the scatter plots of which are shown in Figures 6.7 and 6.8 respectively.
Consider Figure 6.7, for which
sd² = sa² + sR², that is, 438.4² = 187.8² + sR²
to give sR² = 192,194.6 − 35,268.8 = 156,925.8, so that sR = ±396 ppb.
A second estimate can be determined from the data of Figure 6.8 in the same way to give sR = ±377.1 ppb. The average analytical error for the original lab is then [(156,925.8 + 377.1²)/2]^1/2 = ±386.7 ppb.
For an average grade of about 1172 ppb, this gives a relative error of 0.33 and an average precision of 66%, substantially worse than that of the due diligence lab.

Figure 6.7: Scatter plot of original core analysis versus first same pulp analysis by due diligence lab.

Figure 6.8: Scatter plot of original pulp analysis and second same pulp analysis by due diligence lab.
Data of the type displayed in Figures 6.7 and 6.8 are commonly used incorrectly to estimate precision. The fact is that if a precision is determined from data such as those of Figure 6.7, the resulting value is an average precision of the two labs involved, and one lab could be much worse than the other. Let us calculate the precision illustrated by Figure 6.7 as follows:

2sa² = 438.4² = 192,194.6, thus sa = ±310 ppb.

This average error gives an average relative error of 310/1172 = 0.265 and an average precision of 53%. Fortunately, we know from the calculations above that the due diligence lab has a precision of about 30% whereas the original lab has a worse precision of about 66%.

6.6.4 Sampling Error


In this example, the entirety of each sample has been pulverized to a pulp so ideally there is negligible
subsampling error. Of course, one must recognize that there are elements of subsampling involved in what we
have called analytical error, above. Here for practical purposes we define all the error other than analytical
error as sampling error; thus

sa² + ss² = st²

where sa² is the analytical error, ss² the sampling error, and st² the total error.

We know sa² from the calculations we have done above and we have four separate estimates of st.
Consequently, we can calculate an average sampling error. The 4 estimates of st arise from the following
pairings which all involve a comparison from one half core to the other half core.
AARLPT1 vs ACOREPT3
AARLPT1 vs ACOREPT4
AARLPT2 vs ACOREPT3
AARLPT2 vs ACOREPT4
These comparisons are shown in Figures 6.9 to 6.12.

Figure 6.9: First half-core vs. second half core, ACOREPT4 vs AARLPT1

Figure 6.10: First half of core vs second half-core, ACOREPT3 vs AARLPT1

Figure 6.11: First half-core vs. second-half core, ACOREPT3 vs AARLPT2

Figure 6.12: First half-core vs. second half-core, ACOREPT4 vs AARLPT2


Consider the calculations pertinent to Figure 6.9:
sa² + ss² = st², that is, 173.3² + ss² = 789.7²
or ss² = 623,626 − 30,033 = 593,593, from which ss = ±770.5 ppb.
The sampling variance accounts for 95% of the total variability and the analytical error for the remaining 5%. Even as a standard deviation the sampling error (770.5 ppb) dwarfs the analytical error (173.3 ppb).
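The same partition can be expressed compactly; the sketch below uses the numbers quoted above for Figure 6.9.

import math

sa = 173.3                        # average analytical error of the due diligence lab (ppb)
st = 789.7                        # total error, half-core vs half-core comparison (ppb)
ss = math.sqrt(st**2 - sa**2)     # sampling (geological) component
print(round(ss), round(ss**2 / st**2, 2))   # about 770 ppb and 0.95 of the total variance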
It is important to appreciate that the sampling error is actually a measure of inherent geological variability that, for the most part, represents real differences. This inherent or geological variability can be reduced to some extent by (a) sawing half cores instead of splitting them manually, (b) increasing the sample mass by increasing the drill hole diameter, or (c) increasing the sample mass by increasing the length of half core representing a sample. Even a large improvement to the analytical precision (i.e., a large reduction in the random error of analysis) has only a minor impact on the overall total variability.
In cases such as this, there is no obvious subsampling error. The impact of producing a pulp from the entire sample, however, generally has the effect of increasing the analytical error relative to what it would have been had a smaller amount of material been reduced to a pulp. This arises in part because the larger pulp is more difficult to homogenize (for extracting the small amount to be analyzed) than is a smaller amount of pulp.

6.7 SOME COMMONLY ENCOUNTERED SITUATIONS


6.7.1 HALF-CORE VERSUS QUARTER CORE
The Problem

Figure 6.13: Two hypothetical grade distributions centered on the same mean value, a wide
dispersion representing quarter-core data and a narrow dispersion representing half-core data.
It is an established fact that sample variance (i.e., sample grade variability) is a function of sample
size, generally increasing as the sample size decreases (Journel and Huijbregts, 1979). In simple terms, this
means that a data set based on physically small samples will have more high and low values than will a data set
based on physically larger samples. The principle is illustrated in Figure 6.13 where two grade distributions
(ideal normal curves) are centered on the same mean valuesmaller samples produce distributions with
broader dispersion. Note the impact if a cutoff grade is applied to the two curvesthe grade distribution for
smaller samples produces high-grade values that do not exist in larger volumes and the forecasted average
grade is biased high.
This principle of increasing grade dispersion with decreasing sample size becomes a matter of concern in
auditing and due diligence work that involves resampling as a check on the validity of data. The common
practice of quartering core (i.e., halving some of the remaining half cores) leads to the comparison of assays for
two sets of data of differing sample support. In many practical cases the differences in dispersion (standard
deviation) are of the order of a few percent and are not a matter of concern. In other cases, especially those with
a high nugget effect and/or a strongly skewed distribution, the use of quarter cores can lead to very large
differences between quarter-core values and corresponding half-core values.
Part of a property evaluation of a New Brunswick PGM prospect produced a set of half-core assays in the early 1990s. A few years later the remaining half cores were themselves halved to produce quarter cores that were also analyzed for PGMs (Table 6-4). These data have a mean value of about 78 ppb Pt and standard deviations of about 60.6 and 101.4 ppb for half-core and quarter-core data respectively.

TABLE 6-4: COMPARISON OF STATISTICS FOR 1/2-CORES AND CORRESPONDING 1/4-CORES FROM A PGM SHOWING IN NEW BRUNSWICK

              Pt               Au               Pd
              m       s        m       s        m       s
1/4-core      76.2    101.4    21.0    33.3     274.4   379.5
1/2-core      78.5    60.6     21.4    23.3     286.0   214.7

m = mean; s = standard deviation

Figure 6.14: Idealized illustration of a comparison of 1/4-core and 1/2-core assay distributions discussed in text.
Consider the impact of applying a cutoff grade of xc = 100 to both the 1/4-core and the 1/2-core data sets. It is evident from Figure 6.14 that the 1/4-core data above cutoff will produce an average grade substantially higher than will the 1/2-core data above cutoff grade.
It is evident from these data that comparison of quarter-core with half-core assays is not a fair comparison, particularly for the very high values of the 1/4-core data, which are clearly of special interest but for which the 1/4-core data are likely to be biased high relative to the 1/2-core results.
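The size of this effect can be illustrated with a rough calculation that treats the two curves of Figure 6.14 as ideal normal distributions with the Pt statistics of Table 6-4 (a simplification, since real assay distributions are commonly skewed); the sketch below computes the mean grade above a 100 ppb cutoff for each sample support.

import math

def mean_above_cutoff(mu, sigma, cutoff):
    # Mean of a normal distribution truncated below at 'cutoff'.
    z = (cutoff - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))      # P(X > cutoff)
    return mu + sigma * pdf / tail

print(round(mean_above_cutoff(78.0, 60.6, 100.0)))    # half-core: about 141 ppb
print(round(mean_above_cutoff(78.0, 101.4, 100.0)))   # quarter-core: about 173 ppb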

A Solution
A fairer way to compare quarter-core results with half-core analyses in this case is to make 2-metre composites of the 1/4-core values and compare them with each of the corresponding 1-metre 1/2-core values: each composite can be compared first with the upper half-core value and then with the lower half-core value. Compositing two 1-metre quarter-core samples gives roughly the same sample mass as a single 1-metre half-core sample, so the comparison is made at comparable support.

6.7.2 DATA SPANNING A VERY LARGE RANGE (perhaps several orders of magnitude)
The magnitude of errors in analytical data has been shown to be variable as a function of concentration. In
simple terms this means that high assays are likely to have a different error pattern than are low assays. This
situation is accentuated where data span a very large range, say several orders of magnitude, as is often the case
with precious metal deposits. Dealing with such data can be superficial or ambiguous: it is unwise to determine
a single precision figure for a metal, rather, the data should be viewed as a scatter diagram and a sensible
division made into subgroups that can be evaluated separately for error. In many cases, data that span several
orders of magnitude have been obtained using more than one analytical method. One important guideline for
forming subgroups is to use concentration ranges that show comparable scatter about the y = x line; the
threshold can be applied to the average of paired values in order to classify each pair of analyses as being above
or below the threshold. Within each subgroup an average relative error can be determined as representative of
the subgroup. However, it must be recalled that the average absolute error will be a larger percentage of low
values in the subgroup than for high values.
The data can be evaluated by examining histograms and probability plots in addition to scatter diagrams, for the
identification of multiple subpopulations. Because such subpopulations commonly reflect differing geological
styles of mineralization, each can be investigated separately for purposes of identifying and quantifying error.
Where various concentration ranges have widely different density distributions of data, subgroups can be
formed based on density of data. This procedure is a good general approach regardless of whether or not the
data span several orders of magnitude.

6.8 ANALYSES BY ALTERNATIVE ANALYTICAL METHODS


Introduction
A choice of analytical methods commonly exists for determining the abundance of most elements of
potential economic value in the mining industry. In the case of gold, for example, data are commonly produced
by so-called geochem analyses (generally reported as ppm) or the method of choice, fire assay (generally
reported as g/t). Even a tried and true method such as fire assaying for gold has its variants that might lead to
different metal contents being reported (cf. Hoffman et al, 1998). Consequently, it is important that an
appropriate analytical method be used to obtain assay data, including the associated quality control data. One
way of assuring this is to use standards with similar matrix to the general material being assayed; in which case,
an analytical method that reproduces such a standard well should also provide acceptable assay values of the
sampled material. In addition, selection of an appropriate analytical technique should be done in consultation with a qualified, experienced analyst.
Examples will be used to illustrate the nature of problems that can be encountered, depending on the choice of analytical method: (a) two PGE analytical methods, (b) 3-acid and 4-acid extraction procedures for Ni in silicate ores, and (c) chemical versus counting methods for uranium.
Example 1: Two Analytical Methods for PGE Analysis.
In this example, existing pulps were sampled twice, once for platinum group element analyses by a Pb-collection method of fire assay, and once for analysis by a Ni-sulfide-collection method of fire assay. The
duplicate Pd analyses are illustrated in Figure 6.15. In this figure the linear model itself does not prove bias
but a variety of other statistical methods do, including the t-test and paired t-test. In addition, only 6 of the
duplicate analyses plot above the y = x line, that is, 6/31 = 0.194 or 19.4%, a most improbable distribution of
plotted points.
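The "most improbable" remark can be quantified with a simple sign test: if the two methods were unbiased, the chance of a point plotting above the y = x line would be about one half, and the probability of seeing 6 or fewer of 31 points above the line is then a binomial tail probability. A minimal sketch:

from math import comb

n, k = 31, 6
p = sum(comb(n, i) for i in range(k + 1)) / 2**n
print(p)   # about 0.0004, far smaller than any conventional significance level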

Figure 6.15: A comparison of two analytical methods of Pd analysis applied to pulps.

Example 2:

In general, it is important to appreciate whether a metal analysis is a total metal analysis or a partial extraction. For example, nickel can occur in sulphide form as well as being tied up in silicate lattices; in sulphide-rich deposits only the sulphide form is generally of interest because the Ni tied up in silicate lattices is not recoverable. Consequently, an appropriate method of analysis is required. An example is illustrated in Figure 6.16 where a 4-acid digestion produces a total metal analysis whereas a 3-acid digestion approximates sulphide Ni. The figure clearly shows a fixed bias of about 75 ppm Ni that represents the average Ni content of olivine in this example.

Figure 6.16: Scatter plot of Ni analyses by 3-acid versus 4-acid digestion procedures, showing a fixed bias of about 75 ppm Ni. See text for details.

Example 3
This example involves a comparison of instrumental analyses for uranium versus chemical analyses. Abundant data are illustrated in Figure 6.17 where two types of bias are evident. The first is for low values, where decay equilibrium did not prevail; the second is for higher values, where a linear trend to the data is apparent above the y = x line. Generally, in such cases, the chemical analyses are taken as being closer to the true values.

Figure 6.17: A scatter diagram of uranium analyses by an instrumental counting procedure and chemical analysis.

6.9 LACK OF CONSISTENCY IN DUPLICATE DATA


Lengthy periods of time between original analyses and duplicates can lead to ambiguous discrepancies
that are costly and hinder a clear understanding of data quality. Weathering of core could either change the
specific gravity of altered core or could selectively leach valuable elements, particularly where core is stored
outside and warm, humid climates prevail. In addition, different subsampling and analytical procedures for the
two stages of analyses could lead to differences between check analyses and the original data. Consequently,
unnecessary and even unknown errors can arise at levels of sampling, subsampling and analysis. These
problems, while often accentuated in tropical areas, may well develop elsewhere. Hence, it is wise to conduct
duplicate sampling as part of a quality control program at the same time as the original samples are taken rather
than later. As an example, consider the duplicate Cu data illustrated in Figure 6.18 where original analyses of
drill core samples from a tropical environment are compared with duplicate half core samples taken 4 months
later from core stored in open air.

Figure 6.18: Scatter diagram of first half-core analyses for Cu (Cu_ppm) versus values for facing half-cores obtained about 4 months later in a hot humid climate.
There is a complex relationship between the paired data, including evidence that high values appear to have been
overestimated by the initial data relative to the duplicates. This result could arise from a sampling bias at
the time of initial sampling, but could also reflect some selective leaching of Cu during the time
interval between the two samplings. Ni values also show a significant bias, with the original data high relative to
the later data (Figure 6.19). It is interesting to note that the original precious metal values (Pt, Pd, Au) are almost
perfectly reproduced by the duplicates 4 months later. This suggests that slight preferential
leaching of Cu and Ni took place during the 4-month interval.
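One simple way to compare reproducibility element by element, as described above, is the mean relative difference between originals and duplicates. The following sketch uses hypothetical values, not the project data; a result near zero suggests good reproduction (as reported here for the precious metals), whereas a clearly positive result indicates the originals are high relative to the duplicates (as for Cu and Ni).

```python
import numpy as np

# Paired (original, duplicate) analyses per element; illustrative values only
pairs = {
    "Cu_ppm": (np.array([5200.0, 3100.0, 4800.0]), np.array([4900.0, 2950.0, 4500.0])),
    "Pd_ppb": (np.array([310.0, 450.0, 280.0]), np.array([308.0, 452.0, 279.0])),
}

for element, (orig, dup) in pairs.items():
    # Mean relative difference: positive means the originals are high on average
    rel_diff = np.mean((orig - dup) / ((orig + dup) / 2.0))
    print(f"{element}: mean relative difference = {rel_diff:+.1%}")
```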

Figure 6.19: Scatter diagram of original versus duplicate (4 months later) Ni analyses, showing the original values biased high relative to the later data.


Substantial disparity can arise in attempting to verify very old assay data, with, say, tens of years between
original analyses and duplicate analyses. Old data generally lack the highly organized quality control
procedures required today. The problem of potential alteration of old half-cores, as illustrated in the previous
example, is accentuated by the longer time interval involved. Furthermore, the conditions of original analysis
and the subsampling protocol may be impossible to reconstruct from available records and information. This is a
general problem encountered in audits and due diligence work.
Consider the example of Figures 6.20 and 6.21, for which duplicate half-core analyses for Ni were obtained about 30
years after the original analyses. In this case a proportional bias is evident, with the newer
analyses about 10% lower than the original data, on average. In such a case it was essential to have confidence
in the newer analyses because it was not possible to reconstruct the subsampling and analytical protocol for the
original data. Consequently, the newer data required substantial checking by independent labs as well as by
analytical methods other than those used initially. In addition, it was essential to evaluate the quality of the core and
how it might or might not have been affected during the 30-year period it was in storage. In this particular case,
the environment was cold temperate, the core showed little evidence of alteration and, on investigation, the
original lab was thought to be suspect.
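A proportional bias of this kind can be quantified with a slope estimate that treats errors in both variables symmetrically, for example the reduced major axis (RMA) slope (Kermack and Haldane, 1950). The sketch below uses hypothetical paired Ni values, not the data of Figure 6.20; a slope of about 0.9 with a near-zero intercept would correspond to the roughly 10 percent shortfall in the newer analyses.

```python
import numpy as np

x = np.array([0.31, 0.55, 0.42, 0.78, 1.10, 0.26])  # original Ni, % (hypothetical)
y = np.array([0.28, 0.50, 0.37, 0.70, 0.99, 0.24])  # later Ni, % (hypothetical)

# Reduced major axis slope: ratio of standard deviations, signed by the correlation
slope = np.sign(np.corrcoef(x, y)[0, 1]) * np.std(y, ddof=1) / np.std(x, ddof=1)
intercept = np.mean(y) - slope * np.mean(x)
print(f"RMA slope = {slope:.2f}, intercept = {intercept:.3f}  (slope < 1: later data lower)")
```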

Figure 6.20: Scatter plot of 1105 original half-core Ni analyses versus the second half-core analyses
obtained after a 30-year interval.


Figure 6.21: Expanded scatter plot of the relatively low-valued data of Figure 6.20.

The data of Figure 6.20 do not indicate a particularly serious problem. However, if the relatively few high
values are considered influential and removed, the remaining group of abundant lower values indicates an
average bias of more than 10 percent, with the original assays high relative to the later assays. Clearly, this is a
serious problem in what was perceived to be a low-grade deposit.
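The effect of a few influential high-grade pairs can be illustrated numerically. In the sketch below (hypothetical values, not the project data), the aggregate ratio of original to duplicate totals looks acceptable because it is dominated by a few well-reproduced high assays, yet the abundant low-grade pairs on their own show a bias of more than 10 percent.

```python
import numpy as np

orig = np.array([0.21, 0.18, 0.25, 0.30, 0.22, 0.19, 2.40, 3.10])  # % Ni, original assays
dup  = np.array([0.18, 0.16, 0.22, 0.27, 0.20, 0.17, 2.38, 3.05])  # % Ni, later checks

def ratio_bias(a, b):
    """Bias of a relative to b, expressed as sum(a)/sum(b) - 1."""
    return a.sum() / b.sum() - 1.0

low = (orig + dup) / 2.0 < 1.0  # screen out the few influential high-grade pairs
print(f"all pairs:      {ratio_bias(orig, dup):+.1%}")            # modest apparent bias
print(f"low-grade only: {ratio_bias(orig[low], dup[low]):+.1%}")  # more than 10 percent
```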

7.0 REFERENCES
AGI, 1972, Glossary of geology; American Geological Institute, Washington, D. C., 805 p. plus appendices.
Agterberg, F. P., 1974, Geomathematics: mathematical background and geo-science applications; Elsevier Sc.
Pub. Co., Amsterdam, The Netherlands, 596 p.
Annels, A. E., 1991, Mineral deposit evaluation, a practical approach; Chapman and Hall, London, 436 p.
Anon., Mineral Tenure Act of British Columbia.

Bentzen, A., and A. J. Sinclair, 1993, P-RES, a computer program to aid in the investigation of polymetallic ore
reserves; Tech. Rept. MT-9, Mineral Deposit Research Unit, Dept. of Earth and Ocean Sciences, The Univ. of
British Columbia, Vancouver, B. C. (includes diskette), 55 p.
Blackwell, G. H., 1998, in Vallee, M., and A. J. Sinclair (eds.), 1998, Quality assurance, continuous quality
improvement and standards in mineral resource estimation; Exploration and Mining Geology, v. 7, nos. 1 and 2,
pp. 99-106.
Burn, R. G., 1981, Data reliability in ore reserve assessments; Mining Mag., October, pp. 289-299.
Cabri, L. J., 2002, Overview of platinum-group minerals (PGM) and Alaskan-type deposits (abstract);
Abstracts, Cordilleran Roundup (British Columbia and Yukon Chamber of Mines), Vancouver, January 21-25,
p. 43-44.
Cornish, E. C., 1966, Sampling ore deposits; Mineral Industries Bull., Colo. Sch. of Mines, v. 9, no. 2, 14 p.
CSA, 1992a, ISO 9000: Quality management and quality assurance standards: guidelines for selection and
use; Canadian Standards Assoc., 16 p.
CSA, 1992b, ISO 9001: Quality systems: quality assurance in design/development, production, installation
and servicing; Canadian Standards Assoc., 16 p.
CSA, 1992c, ISO 9004: Quality management and quality system elements: guidelines; Canadian
Standards Assoc., 16 p.
Cumming, J. D., 1980, Diamond drill handbook; J. K. Smit, Toronto, Canada, 547 p.
David, M., 1977, Geostatistical ore reserve estimation; Elsevier Sc. Pub. Co., Amsterdam, The Netherlands,
364 p.
Davis, J. C., 1986, Statistics and data analysis in geology; John Wiley and Sons, Inc., New York, 646 p.
Dent, B. M., 1935, On observations of points connected by a linear relation; Proc. Physical Soc.
London, v. 47, pt. 1, p. 92-108.
Farquharson, G., H. Thalenhorst and R. von Guttenberg, 1997, Busang project: technical audit for Bre-X
Minerals Ltd., Interim Report, May 3, 54 p. plus photographs and appendices.
Fletcher, W. K., 1981, Analytical methods in geochemical prospecting; Handbook of exploration geochemistry,
vol. 1, Elsevier Sci. Pub. Co., Amsterdam, Holland, 255 p.
Francois-Bongarcon, D., 1993, The practice of the sampling theory of broken ores; Can. Inst. Mining and
Metallurgy Bull., no. 970, pp. 75-81.
Francois-Bongarcon, D., 1998, Error variance information from paired data: applications to sampling theory: in
Vallee, M., and A. J. Sinclair (eds.), 1998, Quality assurance, continuous quality improvement and standards in
mineral resource estimation; Exploration and Mining Geology, v. 7, nos. 1 and 2, pp. 161-166.
Francois-Bongarcon, D., 1998, Extensions to the demonstration of Gy's formula; in Vallee, M., and A. J.
Sinclair (eds.), 1998, Quality assurance, continuous quality improvement and standards in mineral resource
estimation; Exploration and Mining Geology, v. 7, nos. 1 and 2, pp. 149-154.
Griffiths, J. C., 1967, Scientific method in analysis of sediments; McGraw-Hill Inc., New York, 508 p.
Gy, P., 1979, Sampling of particulate materialstheory and practice; Elsevier Scientific Pub. Co., Amsterdam,
431 p.
Henley, S., 1981, Nonparametric geostatistics; Applied Science Pub., London, 145 p.
Hoffman, E. L., J. R. Clark and J. R. Yeager, 1998, Gold analysis: fire assaying and alternative methods; in
Vallee, M., and A. J. Sinclair (eds.), 1998, Quality assurance, continuous quality improvement and standards in
mineral resource estimation; Exploration and Mining Geology, vol. 7, nos. 1 and 2, pp. 155-160.
Hoover, H. C., and L. H. Hoover, 1950, De re metallica (translated from the first Latin edition of 1556); Dover
Publ. Inc., New York, 638 p.


Ingamells, C. O., 1974, Control of geochemical error through sampling and subsampling diagrams; Geochimica
et Cosmochimica Acta, v. 38, pp. 1225-1237.
Isaaks, E. H., and R. M. Srivastava, 1989, An introduction to applied geostatistics; Oxford Univ. Press, New
York, 561 p.
ISO, 1992a, Quality management and quality assurance standards: guidelines for selection and use; Can.
Standards Assoc., Rexdale, Ont., 16 p.
ISO, 1992b, Quality management and quality system elements: guidelines; Can. Standards Assoc., Rexdale,
Ont., 16 p.
Jen, L. S., 1992, Co-products and by-products of base metal mining in Canada: facts and implications; Can. Inst.
Min. Metall. Bull., v. 85, no. 965, pp. 87-92.
John, M., and H. Thalenhorst, 1991, Don't lose your shirt, take a bulk sample; in Proc. of a Seminar entitled
Sampling and Ore Reserves, Prospectors and Developers Assoc. of Canada, Toronto, Ont., March 23, pp. 11-22.
Jones, C., D. Potter, K. Paris, M. Mamamoba, D. Hudawa and R. Obial, Busang: digging for the truth: the
Freeport due diligence; in More meaningful sampling for the mining industry, A. I. G. Bulletin 22, pp. 83-101.
Journel, A. G., and C. Huijbregts, 1978, Mining Geostatistics, Academic Press, London, U. K., 600 pp.
Kermack, K. A., and J. B. S. Haldane, 1950, Organic correlation and allometry; Biometrika, v. 37, p. 30-41.
Knoll, K., 1989, And now the bad news; The Northern Miner Magazine, v. 6, p. 48-52.
Kratochvil, B., and J. K. Taylor, 1981, Sampling for chemical analysis; Analyt. Chem., v. 53, no. 8, p. 924A-938A.
Lemieux, E., Y. Ruel and B. Parent, 2003, Mineral economics: progress report on the pilot project to
implement ISO 9001-2 quality assurance systems in mineral exploration companies in Quebec; Can. Inst. Min.
Metall. Bull., v. 96, no. 1069, p. 91-94.
Lister, B., and M. J. Gallagher, 1970, An interlaboratory survey of the accuracy of ore analysis; Trans. Inst.
Min. Metall., p. B213-B237.
Long, S. D., 1998, Practical quality control procedures in mineral inventory estimation; in Vallee, M., and A. J.
Sinclair (eds.), 1998, Quality assurance, continuous quality improvement and standards in mineral resource
estimation; Exploration and Mining Geology, v. 7, nos. 1 and 2, p.117-128.
Mark, D. M., and M. Church, 1977, On the misuse of regression in earth science; Math. Geol., v. 9, no. 1, p. 63-75.
McKinstry, H. E., 1948, Mining geology; Prentice-Hall Inc., New York, 680 p.
Miller, R. L., and J. S. Kahn, 1962, Statistical analysis in the geological sciences; John Wiley and Sons, Inc.,
New York, 483 p.
Northern Miner, 1998, Mining explained; The Northern Miner, Don Mills, Ontario, Canada, 150 p.
Ottley, D. J., 1966, Pierre Gy's sampling slide rule; Can. Mining Jour., v. 87, no. 7, p. 58-62.
Pitard, F., 1989a, Pierre Gy's sampling theory and sampling practice; vol. 1, heterogeneity and sampling;
Chemical Rubber Co. Press, Boca Raton, Fl., 214 p.
Pitard, F., 1989b, Pierre Gy's sampling theory and sampling practice; vol. 2, sampling correctness and sampling
practice; Chemical Rubber Co. Press, Boca Raton, Fl., 247 p.
Postle, J., B. Haystead, G. Clow, D. Hora, M. Vallee and M. Jensen, 2000, CIM standards on mineral resources
and reserves: definitions and guidelines; Can. Inst. Min. Metall. Bull., v. 93, no. 1044, pp. 53-61.
Ripley, B. D., and M. Thompson, 1987, Regression techniques for the detection of analytical bias; Analyst, v.
112, p. 377-383.
Rogers, R. S., 1998, Forensic geology and mineral exploration projects; in Vallee, M., and A. J. Sinclair (eds.),
1998, Quality assurance, continuous quality improvement and standards in mineral resource estimation;
Exploration and Mining Geology, v. 7, nos. 1 and 2, p.25-27.
Schwarz, F. P., Jr., S. M. Weber and A. J. Erickson, Jr., 1984, Quality control of sample preparation at the
Mount Hope molybdenite prospect, Eureka County, Nevada; in Erickson, A. J., Jr., (ed.) Applied mining
geology; Amer. Inst. Min. Metall. Eng., Soc. Min. Eng., p. 175-187.
Severin, P., et al, 2002, Estimation of mineral resource and mineral reserves: best practice guidelines;
Sinclair, A. J., 1978, Sampling a mineral deposit for feasibility studies and metallurgical testing; in Mular, A.
L., and R. B. Bhappu (eds.), Proc. Symp. on mineral processing plant design, Amer. Inst. Min. Metall., p. 115-134.
Sinclair, A. J., 1986, Statistical interpretation of soil geochemical data; in Fletcher, W. K., S. J. Hoffman, M. B.
Mehrtens, A. J. Sinclair and I. Thomson, Exploration geochemistry: design and interpretation of soil surveys;
Soc. Econ. Geol., Reviews in Econ. Geol., v. 3, p. 97-115.
Sinclair, A. J., 1987, Statistical interpretation of soil geochemical data; in Fletcher, W. K., et al (eds.),
Exploration geochemistry: design and interpretation of soil surveys; Reviews in Economic Geology, v. 3, p. 97-115.
Sinclair, A. J., 1998, Exploratory data analysis: a precursor to resource/reserve estimation; Exploration and
Mining Geology, v. 7, nos. 1 and 2, pp. 77-90.
Sinclair, A. J., and A. Bentzen, 1998, Evaluation of errors in paired analytical data by a linear model; in Vallee,
M., and A. J. Sinclair (eds.), 1998, Quality assurance, continuous quality improvement and standards in mineral
resource estimation; Exploration and Mining Geology, v. 7, nos. 1 and 2, p. 167-174.
Sinclair A. J., and G. Blackwell, 2002, Applied mineral inventory estimation; Cambridge Univ. Press,
Cambridge, U. K., 381 p.
Smith, L. D., 1996, Due diligence: technical studies; Ann. Gen. Mtg., Can. Inst. Min. Metall., preprint, 6 p.
Sketchley, D. A., 1998, Gold deposits: establishing sampling protocols and monitoring quality control; in
Vallee, M., and A. J. Sinclair (eds.), 1998, Quality assurance, continuous quality improvement and standards in
mineral resource estimation; Exploration and Mining Geology, v. 7, nos. 1 and 2, pp. 129-138.
Smee, B., 1997, Quality control procedures in mineral exploration; Northern Miner, April 1997.
Smee, B., L. Bloom, M. Leaver, P. Hamlyn and M. Smith, 2003, Focus, certified reference material discussion;
Explore, no. 118, Jan. 2003, p. 9-13.
Smee, B. W., and C. R. Stanley, 2004, Technical Note: sample preparation of nuggety samples: dispelling
some myths about sample size and sampling errors; Explore, No. 126, p. 21-26.


Springett, M., 1984, Sampling practices and problems; in Erickson, A. J., Jr. (ed.), Applied mining geology;
Amer. Inst. Min. Metall. Eng., Soc. Min. Eng., pp. 189-195.
Thompson, M., and R. J. Howarth, 1978, A new approach to the estimation of analytical precision; Jour.
Geochem. Expl., v. 9, p. 23-30.
Thompson, M., and R. J. Howarth, 1976a, Duplicate analysis in geochemical practice Part I. Theoretical
approach and estimation of analytical reproducibility; The Analyst, v. 101, p. 690-698.
Thompson, M., and R. J. Howarth, 1976b, Duplicate analysis in geochemical practice Part II. Examination of
proposed method and examples of its use; The Analyst, v. 101, p. 699-709.
Thompson, M., and R. J. Howarth, 1973, The rapid estimation and control of precision by duplicate
determinations; The Analyst, v. 98, no. 1164, p. 153-160.
Till, R., 1974, Statistical methods for the earth scientist: an introduction; Macmillan Press Ltd., London, 154 p.
Vallee, M., 1998, Sampling quality control; in Vallee, M., and A. J. Sinclair (eds.), 1998, Quality assurance,
continuous quality improvement and standards in mineral resource estimation; Exploration and Mining
Geology, v. 7, nos. 1 and 2, pp. 107-116.
Vallee, M., 1998, Quality assurance, continuous quality improvement and standards; Exploration and Mining
Geology, v. 7, nos. 1 and 2, p. 1-13.
Vallee, M., and A. J. Sinclair, 1997, Efficient mining and mineral processing depend on high quality geology,
data gathering and evaluation procedures; CIM Bull., June, p.76-79.
Vallee, M., M. Filon and M. David, 1976, Of assays, tons and dollars, or Can you trust gold assay values?;
preprint of presentation at CIMM Ann. Mtg., Quebec, Que., April 25-28, 1976, 32 p. plus figures and tables.
Walpole, R. E., and R. H. Myers, 1978, Probability and statistics for engineers and scientists; Macmillan Pub.
Co. Inc., New York, 580 p.
