

9 EXTERNAL QUALITY CONTROL OF DATA


by L.P. van Reeuwijk and V.J.G. Houba*

* Part of the information in this chapter was drawn from: V.J.G. Houba and J.J. van der Lee (1995) and Houba et al. (1996).

9.1 Introduction
9.2 Check-analyses by another laboratory
9.3 Interlaboratory sample and data exchange programmes
9.4 Trouble-shooting
9.5 Organization of interlaboratory test programmes
9.6 Quality audit


9.1 Introduction

The quality control of data discussed in the preceding chapter is restricted to internal control. The processes should be monitored closely to see if any unacceptable deviations occur with respect to the situation in the previous period(s) when everything was considered to be under control. However, such control is relative only to the laboratory's own data and may therefore leave a serious bias in the analytical results undetected.

There are several ways to avoid or to discover systematic errors:

1. Use of spikes or pure analytes, e.g. calcium carbonate, gypsum, solutions of pure chemicals (see 7.5.6).

2. Use of independent standards or standard solutions (see 7.2.4).

3. Analyzing (certified) reference samples (see 7.5.1).

4. Exchange of samples with another laboratory or having some own samples analyzed by another laboratory.

5. Participation in interlaboratory sample exchange programmes (round robin tests).

The first three items have been discussed in Chapter 7, and in the ensuing paragraphs attention will be focused on the latter two means of quality control.

9.2 Check-analyses by another laboratory


9.2.1 Single value - single value check
9.2.2 Replicate data - single value check
9.2.3 Replicate data - replicate data check


If an error in a procedure is suspected and the uncertainty cannot readily be resolved, it is not uncommon to have one or more samples analyzed by another laboratory for comparison. This is usually a related laboratory in the neighbourhood ("neighbourly help") or one belonging to the same umbrella organization as the laboratory itself. Sometimes, reputable laboratories elsewhere need to be consulted.

An inherent disadvantage of this procedure is that the results of the other laboratory may themselves be biased. To eliminate this, the check may have to be extended to more laboratories and has then, in fact, become a comparative interlaboratory study (see 9.3).

Three types of data comparison may be distinguished:

1. A single value of a laboratory is compared with a single value of another laboratory.
2. Replicate values of a laboratory are compared with a single value of another.
3. Replicate values of a laboratory are compared with replicate values of another.

9.2.1 Single value - single value check

If the test entails a simple comparison of two single values then the bias can easily be calculated with Equation (7.15) or (7.16). However, it should be realized that each single value carries a confidence range of ±2s (s = standard deviation of the method; see 6.3.4), so that there is a considerable chance of a false conclusion in either direction. Thus, although such a test may be informative in some cases, it can hardly be qualified as GLP.

9.2.2 Replicate data - single value check

This is a situation where replicate results are compared with a single value from another laboratory or with a target value not accompanied by a (meaningful) standard deviation, e.g. a median value from different labs with different methods ("consensus value") derived from a proficiency test. For the test for significance, the two-sided t-test can be used as expressed in Equation (6.12):

t = |¯x - m| × √n / s          (6.12; 9.1)

where

¯x = mean of own results of a sample
m = target value
s = standard deviation of own results
n = number of own results

Example:

We use a variant of the example previously given to calculate bias with Eq. (7.16).

The target value of the Cu content of a sample is 34.0 mg/kg (the standard deviation of m is unknown here; otherwise 9.2.3.1 below applies). The results from 15 replicates with the laboratory's own method are: ¯x = 31.6 mg/kg and s = 5.6 mg/kg. Using Equation (6.12) we calculate t = 1.66, which is less than the critical t-value (2.14, two-sided, df = 14; see App. 1), so we accept the null hypothesis and conclude that no significant difference is found between the target value and the results obtained by the laboratory (at the 95% confidence level and with the number of replicates used).
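
A minimal sketch of this check in Python, reproducing the worked Cu example above (scipy's t distribution replaces the table lookup in App. 1):

    import math
    from scipy import stats

    x_bar = 31.6   # mean of the laboratory's own results (mg/kg)
    m = 34.0       # target value (mg/kg)
    s = 5.6        # standard deviation of own results
    n = 15         # number of replicates

    t_calc = abs(x_bar - m) * math.sqrt(n) / s
    t_crit = stats.t.ppf(0.975, df=n - 1)   # two-sided, 95% confidence

    print(f"t = {t_calc:.2f}, critical t = {t_crit:.2f}")   # 1.66 vs 2.14
    if t_calc <= t_crit:
        print("No significant difference at the 95% confidence level")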

9.2.3 Replicate data - replicate data check


9.2.3.1 Comparison of replicate results on one sample
9.2.3.2 Comparison of replicate results on multiple samples


Statistically, the most reliable comparison for bias is made between data resulting from replicate determinations. Now, two different kinds of check can be distinguished:

1. Comparison of replicate results on one sample.
2. Comparison of replicate results on multiple samples.

9.2.3.1 Comparison of replicate results on one sample

The message from the previous sections is clearly that if another laboratory is asked to perform a bias check, one should preferably ask for at least a duplicate determination. More replicates would further increase the confidence but to a decreasing extent (see 6.3.4). The test for significance of the bias is again a two-sided t-test as discussed with examples in Section 6.4.3.
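
As a sketch, such a replicate-vs-replicate check can be run with a pooled two-sample t-test; the replicate data below are invented for illustration:

    from scipy import stats

    own_lab   = [31.6, 30.9, 32.2, 31.1]   # own replicates (mg/kg)
    other_lab = [34.1, 33.5, 34.8]         # other laboratory's replicates

    # Two-sided pooled t-test (equal variances assumed, scipy's default)
    t_stat, p_value = stats.ttest_ind(own_lab, other_lab)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Bias between the laboratories is significant (95% confidence)")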

9.2.3.2 Comparison of replicate results on multiple samples

This kind of data comparison cannot be considered a "quick check" as considerable work in both laboratories is involved. If the check is limited to a determination on two or three samples, for comparison the two-sided t-test can be used for each sample individually (as above in 9.2.3.1). If more than three samples are involved, the paired t-test can be considered (for examples see 6.4.3.4) and for more than six samples linear regression is indicated (for example see 6.4.4.2).

If, less commonly, precision of an analysis needs to be checked with another laboratory, at least seven replicates by both laboratories are recommended to allow for a reliable F-test (see 6.4.2).
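
A hedged sketch of the multi-sample and precision checks named above, with invented data: scipy's ttest_rel gives the paired t-test, and the F-test is computed from the two sample variances:

    from scipy import stats

    # Paired t-test: one result per sample from each laboratory (>3 samples)
    own   = [12.1, 8.4, 15.0, 10.2, 9.7]
    other = [11.8, 8.9, 14.2, 10.6, 9.3]
    t_stat, p_t = stats.ttest_rel(own, other)
    print(f"paired t = {t_stat:.2f}, p = {p_t:.3f}")

    # F-test for precision: at least seven replicates from each laboratory
    rep_own   = [31.2, 30.8, 32.1, 31.5, 30.9, 31.8, 31.1]
    rep_other = [30.5, 32.4, 29.8, 31.9, 33.0, 30.2, 32.6]
    var_own, var_other = stats.tvar(rep_own), stats.tvar(rep_other)
    F = max(var_own, var_other) / min(var_own, var_other)
    df = len(rep_own) - 1
    p_f = 2 * (1 - stats.f.cdf(F, df, df))   # two-sided
    print(f"F = {F:.2f}, p = {p_f:.3f}")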

9.3 Interlaboratory sample and data exchange programmes


9.3.1 Types of interlaboratory programmes
9.3.2 Proficiency testing
9.3.3 Examples: ISE and IPE


A laboratory which claims that it produces quality data should participate in at least one interlaboratory exchange programme. Accredited laboratories have to provide evidence that they are successfully participating in such a scheme of good national or international repute (these schemes themselves may be accredited).

9.3.1 Types of interlaboratory programmes

Various types of programmes are in operation among laboratories locally, regionally, nationally and internationally, as well as within umbrella organizations. Before joining a scheme the purpose of participation must be clear in order to make a sound choice. The following operational types can be distinguished:

1. Method-performance studies

1.1 Collaborative study: establishing the performance characteristics of an analytical method.
1.2 Comparative study: comparing analytical methods by comparing the results they yield.

2. Laboratory-performance studies

2.1 Proficiency test (one method): comparing the performance of laboratories on the basis of the same analytical method.

2.2 Proficiency test (different methods): comparing the performance of laboratories by comparing the results of their own methods.

3. Material-certification studies

3.1 Certification study: establishing benchmark values for components or properties of a material.

3.2 Consensus study: establishing characteristic values for components or properties of a material, for quality control.

The most common type in which laboratories participate for quality control is Type 2.2, the proficiency test where laboratories receive samples to be analyzed according to their normal procedures. Type 2.1 can run concurrently with Type 2.2 if sufficient participants employ the same analytical method. The same applies to Type 3.2, where a sample, after having been analyzed by a large number of laboratories, may be used as a "reference" sample. This is valuable material, particularly for attributes for which no certified reference material (CRM) exists.

Note: This aspect may offer an attractive opportunity for laboratories to obtain a useful control sample: Arrange with the organizers of a round robin test programme to have a laboratory's own bulk sample used in a proficiency test. Part of the sample is used and the remainder is returned. This opportunity is offered by the WEPAL programmes (see 9.3.3).

Most other study types are usually executed by invitation: the organizing body selects a number of laboratories to participate in a study, the results of which are made available to the whole laboratory community. For instance, Type 1.1 is aimed at the validation of a method and may form the basis of an official national or international standard procedure. Type 3.1 is aimed at the preparation of CRMs.

9.3.2 Proficiency testing

Participation in interlaboratory exchange programmes allows an evaluation of the analytical performance of a laboratory by comparison with the results of other laboratories. Both accuracy and precision can be tested with statistical parameters such as means, standard deviations, repeatability and reproducibility emanating from the collected data. In addition, these schemes can be a useful source of reference samples which can be put to good use internally by participating laboratories.

The usual procedure is that subsamples of a large sample are sent to participating laboratories at regular intervals. Often, subsamples of certain large samples are sent repeatedly without the participants knowing this.

Depending on the material to be analyzed, the laboratories can follow their own analytical procedures (Type 3.2) or can perform the analyses according to a detailed extraction/destruction and measuring technique (Type 3.1). For example, to determine the inorganic chemical composition of dried, ground crop material, one is interested in total contents of components. In that case the laboratory results should tally, regardless of the preprocessing and/or measuring techniques; if they do not, one or more of the analytical procedures must be incorrect. By contrast, determining total contents is rarely important when analyzing the inorganic chemical composition of soils and sediments, except for geological studies. For environmental and agronomic research one is much more interested in certain fractions of these total contents. For most elements, for example, aqua regia digestion yields only a part of the total contents. The magnitude of this part depends on the nature of the samples and on the form in which the elements occur (adsorbed, occluded, in minerals, etc.). In addition, there is a large choice of extractants, ranging from strong acids to unbuffered weak electrolyte solutions or just water. Accordingly, one can find very divergent values for the content of an element in the soil or sediment, depending on the extraction potential of the solution used. The conditions for digestion and extraction procedures must therefore always be stipulated in detail in a SOP.

When subsamples have been analyzed by participants for one or more attributes the results are sent to the scheme's bureau. Here the data are processed and reports of each round are sent to participants. After a number of rounds usually a more extensive report is made since more data allow more and better statistical conclusions. Participants can inspect their results and, when significant and/or systematic deviations are noticed, they may take corrective action in the laboratory.

Although the samples usually are analyzed by a large number of laboratories, the results should still be interpreted with caution. The analytical procedures used by participants may differ considerably which may lead to bias and imprecision (also in the consensus value). Even "true" values of certified reference samples, which were determined by a number of selected renowned laboratories, have occasionally been proven to be wrong. It may even be that some outliers are right and all other laboratories wrong. However, these are exceptional cases which, in fact, only underscore the usefulness of interlaboratory exchange programmes.

9.3.3 Examples: ISE and IPE


9.3.3.1 Data processing
9.3.3.2 Rating with t-value
9.3.3.3 Proficiency control chart
9.3.3.4 Rating with Z-score


As examples of schemes with a good international reputation we mention the International Plant Analytical Exchange and International Soil Analytical Exchange (IPE and ISE) programmes. These are the oldest parts of WEPAL, the Wageningen Evaluating Programmes for Analytical Laboratories of the Wageningen Agricultural University. IPE, with over 250 participants from some 80 countries, has been in operation since 1956. ISE, with more than 300 participants, was started in 1988. The operational procedures of WEPAL are outlined at the end of this chapter.

9.3.3.1 Data processing

For each round, data are collected for attributes analyzed by participants. The "normal" way of data treatment would be to calculate the mean and standard deviation and to repeat this leaving out the data beyond ±2s. However, in proficiency tests and consensus studies there is a preference for using the median value rather than the mean. The median is the middle observation of the sorted array of observations in the case of an odd number of observations. In case of an even number it is the mean of the two middle observations. Using the median rather than the mean reduces the influence of extreme data.
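
For instance, Python's statistics.median implements exactly this rule:

    import statistics

    # Odd number of observations: the middle value of the sorted array
    print(statistics.median([31.6, 34.2, 33.0, 35.1, 32.4]))  # -> 33.0
    # Even number: the mean of the two middle observations
    print(statistics.median([31.6, 34.2, 33.0, 35.1]))        # -> 33.6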

9.3.3.2 Rating with t-value

For each attribute the median value (m1) and the median of absolute deviations (MAD, s1) are calculated. The MAD (like the standard deviation, a measure of the spread of the data) is the median of the absolute differences between each observation and the median.

When more than seven observations for a certain attribute are reported by participants, the following rating procedure can be performed. All values x for which

|x - m1| > f × s1          (9.2)

are flagged with a double asterisk (**). The factor f is aimed at flagging 5% of the data and, assuming a normal distribution, is approximated by (0.7722 + 1.604/n) × t, where t is the t-value in the two-sided 95% probability table with df = n - 1 (see Section 6.4.1, Fig. 6-2, and App. 1). This procedure is repeated leaving out the data flagged with **, which yields a second median (m2) and a second MAD (s2). These values are then substituted in Equation (9.2), and all results x now satisfying that inequality are flagged with a single asterisk (*). An example of such a data set is given in Table 9-1.
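
A sketch of this two-pass flagging in Python is given below. It assumes Equation (9.2) takes the form |x - m1| > f × s1, and it reuses the factor f from the full data set in the second pass (whether f is recalculated with the reduced n is not stated above):

    import statistics
    from scipy import stats

    def mad(values, m):
        # Median of the absolute differences from the median m
        return statistics.median(abs(x - m) for x in values)

    def flag_outliers(values):
        n = len(values)                  # only meaningful for n > 7
        t = stats.t.ppf(0.975, df=n - 1)
        f = (0.7722 + 1.604 / n) * t     # aimed at flagging 5% of the data

        # Pass 1: flag '**' against median m1 and MAD s1 of all data
        m1 = statistics.median(values)
        s1 = mad(values, m1)
        flags = ["**" if abs(x - m1) > f * s1 else "" for x in values]

        # Pass 2: recompute median and MAD without '**' data, then flag '*'
        kept = [x for x, fl in zip(values, flags) if fl != "**"]
        m2 = statistics.median(kept)
        s2 = mad(kept, m2)
        for i, x in enumerate(values):
            if not flags[i] and abs(x - m2) > f * s2:
                flags[i] = "*"
        return flags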

In this table there is a column for the MIC, the Method Indicating Code. This code of at most four characters indicates the analytical procedure used by each participant, allowing a better evaluation of the results. In this way, for instance, bias resulting from a particular digestion procedure may be revealed. Also, the reproducibility (see 7.5.2.1) of a particular method used by different participants can be calculated.

9.3.3.3 Proficiency control chart

When results do not significantly differ from the consensus mean, this does not necessarily imply that the analytical process is perfect: the observations may systematically lie above or below the mean or median. A kind of "proficiency control chart" can be constructed to reveal this, using the relative deviation of the results from the median (or mean), i.e. the difference between the observation and the median expressed as a percentage of this median (cf. CV, RSD). Plotting this against time and drawing the values of 2 × relative MAD in the graph (in Fig. 9-1: the lengths of the vertical bars, comparable to the Warning Limits of the Control Chart of the Mean, see 8.3.2) allows a laboratory to see where its own values lie and whether there is a trend. In fact, the same quality control rules as used for the control chart of the mean can be applied.
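
As a sketch, the chart points and warning limits could be computed as follows (the function name and numbers are illustrative only, not from the text):

    def chart_point(x, median, mad_value):
        # Relative deviation from the consensus median, and the
        # warning limit of 2 x relative MAD (both in % of the median)
        rel_dev = 100.0 * (x - median) / median
        warning_limit = 2 * 100.0 * mad_value / median
        return rel_dev, warning_limit

    rel_dev, wl = chart_point(x=31.6, median=34.0, mad_value=2.5)
    print(f"deviation {rel_dev:+.1f}% vs warning limits +/-{wl:.1f}%")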

Fig. 9-1. Proficiency control chart for the determination of boron in a crop as found by a participant in IPE during 1994 (six samples per two-months' round). The length of the vertical bars equals 2 × MAD (as % of median) and can be considered the Warning Limit. Two values appear to be beyond this Limit.

Table 9-1. Example of data presentation: results for the Al content in crop samples from IPE in Round 5, 1994 (in mg/kg).

Laboratory   White     White     Amaryllis Maize     Potato    Broad-    MIC
             Cabbage   Cabbage   (bulb)                        beans
A            32.8      30.6      293       351       29.1      278       AA|E
B            67.0*     76.0**    678**     1051**    38.0      544**     DE|CB
C            69.1**    71.0**    441**     776**     39.0      343       AC|CB
D            31.3      27.5      196       311       24.7      260       EE|BF
E            9.0       41.0      284       352       34.0      306       EE|CB
F            34.6      36.4      290       336       30.3      309       DG|CB
G            36.8      36.0      176       262       32.0      166       -
H            86.5**    101.0**   354       353       43.0      353       DE|AB
I            -         30.2      -         -         47.7      -         DA|CB
J            42.0      36.9      178       288       32.2      202       AA|CB
K            41.3      45.5      208       319       44.9      227       -
L            33.0      35.0      160       220       32.0      166       EE|CB
M            172.7**   154.2**   274       254       80.1**    206       G|CB
N            76.5**    152.0**   190       281       57.4*     192       DG|CB
O            45.0      36.0      172       293       24.0      204       DB|CB
P            36.2      38.1      133       291       34.5      -         AA|CB
Q            47.3      45.8      252       293       46.5      221       -
R            37.0      36.0      195       220       77.9**    203       AB|AE
S            46.1      49.4      270       340       51.4*     294       AA|CB
T            26.1      26.7      274       332       21.9      274       -
U            89.0**    123.0**   382       459**     119.0**   406*      DA|CB
V            35.3      35.6      119       284       45.8      150       G|AE
W            49.5      45.6      326       346       41.6      322       DG|CB
X            67.0*     70.0**    683**     993**     56.0*     500**     -
Y            48.5      36.5      158       267       36.6      178       DB|CB
Z            22.1      21.3      112       184       27.9      123       AA|AA
AA           30.2      28.6      142       234       32.7      142       -
BB           67.1*     -         593**     713**     -         549**     G|L
CC           23.0      32.0      134       243       30.0      138       DB|CB
DD           34.0      35.0      210       280       31.0      165       DC|CB
EE           46.4      27.4      280       253       37.9      213       -
FF           24.4      23.4      106       184       26.8      109       G|CB
GG           32.3      31.8      196       295       31.3      203       DB|CB

Median: (1)  40.2      36.2      209       293       35.5      213
        (2)  36.8      35.6      196       288       34.0      205
MAD:    (1)  8.10      8.15      69.0      45.0      6.95      63.0
        (2)  6.59      5.00      62.5      35.0      5.00      55.0


9.3.3.4 Rating with Z-score

Individual rating of the proficiency of a laboratory can also be done with the normal deviate or so-called "Z-score" which is based on the bias relative to the mean of all laboratories:

Z = (x - ¯x) / s          (9.3)

where

x = individual result
¯x = mean of all results
s = standard deviation of the set of results

Before the mean is calculated, outliers flagged with ** and * as described above are removed.

For easy visualization of Z, Figure 6-2 (p. 74) can be used: assuming a normally distributed collection of data, 5% of the Z-scores would fall outside the range -2<Z<2 (where x is more than 2s off from ¯x) and only 0.3% outside the range -3<Z<3 (see also Note 2 below).

Hence, the following rating is usually employed:

|Z| ≤ 2       : satisfactory
2 < |Z| ≤ 3   : questionable
|Z| > 3       : unsatisfactory

This property of Z allows the Z-score for each attribute to be recorded on a kind of control chart derived from the Control Chart of the Mean discussed in Section 8.3.2. A model is given in Figure 9-2.

Note 1. Here, again, individual ratings should be used cautiously as the system is relative to a consensus mean, outliers are not considered, and the data collection may not be normally distributed.

Note 2. The value of Z equals the value of ttab when n is large, and is approx. 2 at 95% confidence (two-sided).
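
A minimal sketch of the Z-score and its rating, assuming the outliers flagged ** and * have already been removed from the result set (the data below are invented):

    import statistics

    def z_score(x, results):
        # Eq. (9.3): normal deviate relative to the mean of all results
        return (x - statistics.mean(results)) / statistics.stdev(results)

    def rating(z):
        if abs(z) <= 2:
            return "satisfactory"
        if abs(z) <= 3:
            return "questionable"
        return "unsatisfactory"

    results = [31.6, 34.2, 33.0, 35.1, 32.4, 36.0, 33.8]
    z = z_score(36.0, results)
    print(f"Z = {z:.2f}: {rating(z)}")   # Z = 1.48: satisfactory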

Fig. 9-2. Model for a Z-score control chart for one attribute in six interlaboratory control samples per round. The value with the arrow indicates an outlier off the scale.

9.4 Trouble-shooting

Action must be taken when statistically significant deviations are scored, or when results are consistently above or below the mean (see Rejection Rules). This holds both for the internal control with control charts and for external control with round robin tests. The difference is that the results of the round robin tests always come with a time-lag: you cannot immediately repeat a batch or correct a problem. Clearly, corrective action must be taken as soon as problems are spotted, be it by internal or external control. Therefore, the ensuing discussion is not limited to problems emanating from third-line control only, but applies to all cases where problems are encountered.

One of the first actions must be to inspect whether the deviation occurs for only one control sample or round-robin sample, or whether several samples in one batch/round/period deviate (possibly without exceeding the Action Line or scoring asterisks). Earlier reports must be consulted to see if there have been problems previously with that specific attribute. If an extreme value is scored only once for a certain sample, this may indicate that this one measurement is wrong or that there is an unexpected matrix interference. It may be necessary to go back to the measurements in the archives to check this (audit trailing). This will include a re-check of the second-line (batch) control: was the result of the control sample correct? If no mistakes are found, the sample in question must be reanalyzed and in this analysis, for instance, the sample: liquid ratio may be varied. If anomalies in an attribute occur in several samples, the entire analytical procedure should be scrutinized critically. The following should then be inspected:

1. The results of the first-line check (calibration of equipment, etc., see also Chapter 5).

2. The results of measurements: these should be checked on the basis of the original signals, counts, absorbances, etc. (and not on the basis of the final results of software procedures).

3. The standard solutions used. This involves checking whether the manufacturer's values for standard solutions are correct, or whether the salts used are indeed primary standards and have indeed been pretreated correctly. These salts can lose or attract water. Standard solutions that have been kept for too long or in unsuitable bottles can change in concentration, e.g. because the bottle was not stoppered properly, allowing water to evaporate (see also 5.3).

4. The correctness of the pipettes and other volumetric glassware used. It is known that sometimes the volume of the adjustable pipettes, commonly used in laboratories, deviates from the guaranteed volume. Therefore all such pipettes should be tested regularly (see also 5.2.2.4).

5. The automatic pipetting of measuring equipment. Table 9-2 gives an example of deviations in the automatic pipetting equipment of a flameless atomic absorption spectrometer. This deviation from the given value may have great consequences for the standard series which is prepared by dilution of a standard solution with an injector and by standard addition.

6. In round robin programmes: the digestion and detection techniques followed. Information on this can be found in the MIC.

Table 9-2. Volume of the automatic injector of a sample changer of a flameless AAS (in µL).

Pump setting   Volume measured   Difference
                                 absolute   relative
Old pump
   5               6.3           +1.3       +26 %
  10              13.0           +3.0       +30 %
  20              20.7           +0.7       +3.5 %
  25              25.6           +0.6       +2.5 %
New pump
   5               6.5           +1.5       +30 %
  10              11.1           +1.1       +11 %
  20              20.6           +0.6       +3 %
  25              26.2           +1.2       +5 %

Some other sources of error are:

1. General

- Filter paper washed in acid can cause secondary reactions when soil suspensions prepared with unbuffered salt solutions are being filtered. This is particularly likely if the first portion of the filtrate is not removed.

- Old hollow cathode lamps can impair calibration graphs.

- Voltage fluctuations of electricity mains.

- Portable telephones can disturb the functioning of sampling machines causing them, for example, to skip a sample.

2. Contaminations

- Filter paper that has been taken out of its wrapping can absorb substances, particularly ammonia.

- NH3 can be produced in demineralized water by the slow breakdown of the resins used.

- Boron contamination can arise from laundered laboratory coats, through the release of perborate from the detergent.

- The paint from logos on certain new glassware may dissolve in the acid used for cleaning (and enter the glassware).

- The cooling circuit of GF-AAS can become clogged with rust after being washed out with tap water under high pressure.

- Zinc contamination may arise from dandruff of persons using anti-dandruff shampoo (such shampoos commonly contain zinc pyrithione).

- Grinding can lead to contamination from the grinding apparatus. An example of this source of error is given in Table 9-3. Mill A has a cast iron casing; in mill B the casing is made of aluminium.

- Sieves may give off unwanted elements (e.g. brass: copper, zinc).

- Glassware may be contaminated by inadequate cleaning and rinsing. This may particularly occur when glassware is used for different analyses. Blank determinations may reveal such problems.

Table 9-3. Influence of grinding on the results of analyzing barley (in mg/kg). Mill A: cast iron casing; Mill B: aluminium casing.

Mill   Run    Al     Cu     Fe    Pb     Zn
A      1      12    5.32    420   0.03   25.4
       2      11    5.36    454   0.01   25.8
       3      24    5.31    487   0.08   26.1
B      1     102    7.13     94   0.14   26.1
       2     112    6.45     91   0.19   26.3
       3     104    6.46     74   0.14   25.7

9.5 Organization of interlaboratory test programmes

Although it is beyond the scope of the present discussion, a few general remarks may be useful for those who contemplate organizing an interlaboratory test programme, be it locally or on a wider scale.

It was mentioned in the Preface that more and more governments are requiring accreditation from laboratories that carry out, for instance, environmental and ecological analyses for particular studies or for the establishment of databases. This implies that accreditation bodies have been set up or are to be set up. Furthermore, it may be strategically useful for such accreditation bodies to be recognized internationally, so that the analytical results of an accredited laboratory are also accepted across borders, e.g. by foreign or international organizations.

Note. Information on this aspect can be obtained from the International Laboratory Accreditation Conference (ILAC), P.O. Box 29152, 3001 GD Rotterdam, the Netherlands.

An accreditation body may delegate to a renowned laboratory the organization of a regional or national or even international interlaboratory test programme as part of a larger external quality assurance programme. This could also be executed by a cooperative organization of laboratories such as SPALNA (see note in Section 5.2.2.2). Numerous papers describe the organization of interlaboratory tests, e.g. International Standard ISO 5725 (latest ed.), Horwitz (1988), Funk et al. (1995), and Houba et al. (1996), and the reader is referred to such papers for further information.

Meanwhile, the assistance of such a cooperative organization or network need not be limited to round robin tests but can be extended to other essential aspects of quality assurance such as:

- Preparation of control samples

- Testing of methods

- Organization of Training Workshops (SPALNA does this, among others, for Equipment Maintenance, Analytical Methodology, and Quality Management and Data Handling, in both English and French).

- Making available consultants for trouble shooting and quality audits.

Particularly for individual laboratories, but also for groups of laboratories, there are clear organizational and budgetary advantages in joining an existing laboratory network with these aims. If the need for a local or regional network is still felt, one laboratory (or group of laboratories) interested in improving the quality of its output could take the initiative to set one up. Some kind of cross-link with an established scheme elsewhere would be beneficial, particularly in the initial stage.

9.6 Quality audit

As stated in Chapter 1, when the desired quality level of the output of the laboratory is reached, it must be maintained and, where necessary, improved. To achieve this, the Quality Manual should contain a plan for regular checking of all quality assurance measures as they have been discussed so far. Such a plan would include a regular reporting to the management of the institute or company.

This is usually done by the head of laboratory and/or, if applicable, by the quality assurance officer.

In addition to such continuous internal inspection, it is very useful, particularly for larger laboratories, to have the quality system reviewed by an independent external auditor. For accreditation such a review is even an inherent part of the process.

An external audit can assist the organization to recognize bottlenecks and flaws. Such shortcomings often result from insufficient and inefficient measures and activities which remain unnoticed or are ignored.

An audit can be requested by the laboratory itself or by the management of the institute and involves basically the inspection of the Quality Manual, i.e. all the protocols, procedures, registration forms, logbooks, control charts, and other documents related to the laboratory work. Attention is not only given to the contents of the documents, but also to the practical implementation ('say what you do, do what you say, and be able to show what you have done'). Laboratory staff sometimes see these audits as a sign of suspicion about their performance, and sometimes audits may be (mis)used to get things organized or changed under the guise of quality. Yet, the auditor should not be seen as a policeman but as someone who was asked to help. Therefore, good cooperation with the auditor is essential for the effectiveness of the audit. Conversely, the auditor should be selected carefully for the same reason.

In large laboratories it may be advisable to have the audit done by more than one person, for instance an organization specialist and an analytical expert.

The audit should result in a report of findings and recommendations for remedying possible shortcomings. Subsequently, it is the management that will have to decide to what extent the report will remain confidential, and if and what actions will have to be taken.

Wageningen Evaluating Programmes for Analytical Laboratories (WEPAL)

The world's largest laboratory-performance study schemes for the analysis of soils, sediments, crops, manures and refuse materials are included in the Wageningen Evaluating Programmes for Analytical Laboratories (WEPAL), organized by the Department of Soil Science and Plant Nutrition of the Wageningen Agricultural University, the Netherlands. These programmes are:

International Plant-analytical Exchange (IPE)

A laboratory-performance study on the inorganic chemical analysis of crop material. Every two months the participants receive six dried, ground crop samples in coded plastic sample bags. The participants analyze these crop samples according to their own usual techniques (extraction and/or destruction, measurements). At the end of the test period the results are sent to Wageningen on pre-printed forms supplied by WEPAL, where they are processed. The participants are informed of the outcome within three weeks of the end of the test period. The results are accompanied by information about the digestion and detection technique, given via a four-letter code. The programme was initiated some 40 years ago (in 1956) and currently has about 250 participants from 80 countries.

International Soil-analytical Exchange (ISE)

A laboratory-performance study on (mainly) chemical analysis of soils. Initiated in 1988, this programme at present has almost 300 participants from 80 countries who receive four dried, ground soil samples every three months. These samples can be analyzed to determine the total content of many elements, but can also be submitted to a variety of extraction procedures, as well as to the determination of soil properties such as pH, conductivity, cation exchange capacity and clay content. The further organization and processing of data, including the denotation of the digestion and detection techniques followed, are similar to those of IPE.

International Sediment Exchange for Tests on Organic Contaminants (SETOC)

A laboratory-performance study dealing with organic substances in soils and sediments. This study started in 1992 and currently has 90 participants. The organization, frequency of rounds, and reporting are as for ISE. The participants in this programme can report contents for 16 PAHs, 12 PCBs, 27 organochlorine pesticides and several heavy metals. This test is organized jointly with the Institute for Environmental Studies at the Free University of Amsterdam, the Netherlands.

International Manure and Refuse Sample Exchange Programme (MARSEP)

A laboratory-performance study on the chemical composition of manures, composts and sludges. This programme was started in 1994 and currently has 75 participants. The samples can be analyzed for real total and "total" contents of many elements. The organization, frequency of rounds, reporting, as well as the coding of the digestion and detection techniques followed, are similar to those of ISE.

For reasons of confidentiality of the results, participants may opt for a code name in the reports. The organization has equipment which can automatically divide large amounts of sample material into representative subsamples for all programmes.

Reference materials

WEPAL offers participants the opportunity to send dry bulk samples (50 kg of soil or 6 kg of plant material) for use as a sample in a test round. The remainder of the material (about 1/4) will be returned to the sender and can then act as a valuable internal reference sample with consensus values.

For more information contact:

WEPAL, Dept. of Soil Science and Plant Nutrition
Wageningen Agricultural University
P.O. Box 8005
6700 EC Wageningen, the Netherlands.
E-mail: [email protected]
Internet: http://www.benp.wau.nl/wepal

