Quality Control (was Re: apology)

Dr. Martin R. Hadam (Hadam.Martin@MH-Hannover.de)
Sun, 25 May 97 10:56:54 +0100

On Fri, 23 May 1997 15:43:00 -0400, ruth ann croson lowney wrote:

>I would also like to know how you arrived at your assessment of
>misdiagnosis by flow being 10-15%. [..]
>While I understand that this was only one institution, one would hope that
>any licensed lab would follow a similar procedure.

I won't continue the discussion about rude posts (I think the post was
rude, but the apology was by and large acceptable and added a valid argument).

I'll simply try to contribute a datapoint which may help you accept
such 10-15% estimates. Even though it does not relate to clinical
diagnostic procedures, it does bear on quality control in flow cytometry
in general.

For the past months I have been evaluating the data established by the
6th International Workshop on Leukocyte Differentiation Antigens
(HLDA6). The entire Blind Panel data are accessible on the net. They
were downloaded and transformed into a database. Local clustering done
here demonstrated that we had the correct data, since we obtained the
same dendrograms as were distributed by the workshop.
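For readers who want to reproduce this kind of check, a minimal Python
sketch follows. The file name, column layout and the choice of distance
metric and linkage are assumptions for illustration only; they are not
necessarily what the workshop or our database actually used.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import pdist

    # rows = blind panel antibodies, columns = cell preparations,
    # values = percent positive cells (hypothetical layout)
    data = np.loadtxt("hlda6_blind_panel.csv", delimiter=",",
                      skiprows=1, usecols=range(1, 21))

    # Euclidean distance with average linkage as a plausible choice;
    # the workshop dendrograms may rest on a different metric/linkage.
    tree = linkage(pdist(data, metric="euclidean"), method="average")

    # The resulting dendrogram can then be compared with the
    # dendrograms distributed by the workshop.
    dendrogram(tree)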

As a measure of reproducibility, I sorted out all those cases where
*identical* antibodies were assayed in duplicate under different blind
panel codes. From the total of 547 blind panel antibodies, 136 (25%)
could thus be assigned to 68 pairs of duplicates (including negative
controls; the few samples available in triplicate were only counted
as duplicates).
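How such pairing might be done in code, as a small hedged sketch: the
mapping "antibody_of" (blind panel code -> true antibody identity) is a
hypothetical structure, since the real assignments come from the
workshop's unblinding key.

    from collections import defaultdict

    def find_duplicate_pairs(antibody_of):
        """Group blind panel codes by the antibody they actually contain."""
        groups = defaultdict(list)
        for code, antibody in antibody_of.items():
            groups[antibody].append(code)
        # keep only antibodies submitted more than once; triplicates are
        # reduced to their first two codes, i.e. treated as duplicates
        return {ab: codes[:2] for ab, codes in groups.items() if len(codes) >= 2}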

In total, 81 assays on these 547 coded samples were performed by a
number of "reference labs" largely selected for competence in the
field. In theory, plotting the two members of each duplicate pair, as
measured within a single assay, against each other should yield
datapoints on the diagonal. Unfortunately, this was not observed.
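To make that expectation concrete, a small illustrative sketch (variable
names are assumed) that plots the two coded copies of each duplicate
pair against each other for one assay; ideally all points would fall on
the identity line.

    import matplotlib.pyplot as plt

    def plot_duplicates(first, second):
        """first/second: % positive cells for the two coded copies of each pair."""
        plt.scatter(first, second)
        plt.plot([0, 100], [0, 100], linestyle="--")  # ideal identity line
        plt.xlabel("% positive, duplicate 1")
        plt.ylabel("% positive, duplicate 2")
        plt.show()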

Hence we defined "missed duplicates" as duplicate antibody pairs whose
percentages of positive cells deviated by (a) more than 10% or (b) more
than 20%. Please note that many of the >20% differences are in fact
something like 0-100% pairs (i.e. entirely screwed up ones). Simply
counting the number of "missed duplicates" at both levels per assay
provided the following statistics for the entire blind panel (a small
counting sketch follows the figures below):

Number of duplicates per assay deviating by more than 10% (out of 68
total):
mean 13.3457
SD 7.5732
range 1 to 36

Number of duplicates per assay deviating by more than 20% (out of 68
total):
mean 7.3457
SD 5.666
range 0 to 25
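As an illustration only, here is a minimal Python sketch of how such
per-assay counts and summary statistics could be tallied. The variable
names and data layout ("assays" mapping an assay identifier to a list
of (duplicate 1, duplicate 2) percent-positive pairs) are assumptions,
not the database structure actually used here.

    import statistics

    def missed_duplicates(pairs, threshold):
        """Count duplicate pairs whose % positive values differ by more than threshold."""
        return sum(1 for a, b in pairs if abs(a - b) > threshold)

    def summarize(assays, threshold):
        """Mean, SD and range of missed-duplicate counts across all assays."""
        counts = [missed_duplicates(pairs, threshold) for pairs in assays.values()]
        return {
            "mean": statistics.mean(counts),
            "SD": statistics.stdev(counts),
            "range": (min(counts), max(counts)),
        }

    # summarize(assays, 10) and summarize(assays, 20) would then yield
    # the two sets of figures quoted above.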

I leave it up to the readers what to think of data when 25 out of 68
duplicate determinations differ by more than 20% each (or 36 out of 68
differ by more than 10%). Sure, those plots look strange <eg>.

I should add that those values vary a lot between laboratories but
*also* between different assays performed by the *same* laboratory.

To me, this seems quite an accurate assessment of the state of quality
(control) in flow.

Martin R. Hadam
Kinderklinik - Medizinische Hochschule
D-30623 Hannover
Germany
Email: Hadam.Martin@MH-Hannover.de