Journal of Automated Methods and Management in Chemistry
Volume 2008, Article ID 498921, 14 pages
Research Article

Increasing Efficiency and Quality by Consolidation of Clinical Chemistry and Immunochemistry Systems with MODULAR ANALYTICS SWA

1University Department of Laboratory Medicine, Hospital of Desio, Via Benefattori 2, 20033 Desio Milano, Italy
2Department of Pathology, Beth Israel Deaconess Medical Center, Boston, MA 02215-5400, USA
3Gemeinschaftspraxis Dr. med. Bernd Schottdorf u.a., 86154 Augsburg, Germany
4Department of Clinical Chemistry, Ziekenhuis Rijnstate Arnhem, 6800 TA Arnhem, The Netherlands
5Department of Clinical Chemistry, Georg-August-University Göttingen, 37075 Göttingen, Germany
6Hospital de la Plana, Vila Real 1254, Castelló, Spain
7Roche Diagnostics GmbH, Sandhofer Street 116, 68305 Mannheim, Germany
8Roche Diagnostics Operations, Inc., 9115 Hague Road, P.O. Box 50416, Indianapolis, IN 46250, USA

Received 25 October 2007; Accepted 19 December 2007

Academic Editor: Peter Stockwell

Copyright © 2008 Paolo Mocarelli et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


MODULAR ANALYTICS Serum Work Area (in USA: Integrated MODULAR ANALYTICS; MODULAR ANALYTICS is a trademark of a member of the Roche Group) represents a further approach to automation in laboratory medicine. This instrument combines previously introduced modular systems for the clinical chemistry and immunochemistry laboratory and allows customised combinations for various laboratory workloads. Functionality, practicability, and workflow behaviour of MODULAR ANALYTICS Serum Work Area were evaluated in an international multicenter study at six laboratories. Across all experiments, 236000 results from 32400 samples were generated using 93 methods. Simulated routine testing, which included provocation incidents and anomalous situations, demonstrated good performance and full functionality. Heterogeneous immunoassays, performed on the E-module with electrochemiluminescence technology, showed reproducibility at the same level as the general chemistry tests, well within clinical demands. Sample carryover cannot occur, owing to intelligent sample processing. Workflow experiments for the various module combinations, with menus of about 50 assays, yielded mean sample processing times of <38 minutes for combined clinical chemistry and immunochemistry requests, and <50 minutes including automatically repeated samples. MODULAR ANALYTICS Serum Work Area simplified workflow by combining various laboratory segments, and it increased efficiency while maintaining or even improving the quality of laboratory processes.

1. Introduction

The clinical laboratory is arguably the frontrunner in applying scientific discoveries and technical innovations to patient care. For example, not only are far more tests readily available now than just twenty years ago, but the tests themselves have also gained sensitivity and specificity (e.g., hs-CRP, ferritin). It has been estimated that about 65% of medical decisions are based on laboratory tests [1].

Paradoxically, the clinical laboratory's success has placed it under even greater pressure to produce more and better test results, with shorter turnaround times and at lower costs. As clinical laboratories have evolved, they have relied heavily on automation. By moving from manual assays of single analytes to random-access, multichannel, automated instruments, more tests can be done, more frequently, with fewer people. As noted in recent publications, by combining several of these instruments into a novel single platform for the clinical chemistry [2] and the immunochemistry laboratory [3], these analysers represented a new degree of consolidation.

However, there has been little integration of traditional clinical chemistry (ISE, spectrophotometry, homogeneous immunoassay) and heterogeneous immunoassay. From an analytical and technology perspective, the separation of the two types of analysers may make sense. But, from a medical perspective, of course, the separation is entirely artificial. For the patient in the emergency room, the physician needs to know the troponin and the potassium. For the oncology patient, the physician needs to know the CEA as well as the calcium. Does it make sense to draw two tubes of blood to ensure quick turnaround time by running the sample on two analysers simultaneously? Or, if just one tube is drawn, is it the only solution to ensure quick turnaround time by asking a technologist to make sure that, as soon as the tube is finished on the chemistry analyser, it gets placed on the immunoassay analyser to be analysed there? With either scenario, there are inherent inefficiencies, as compared to running a single tube on a single system for all the requested tests. MODULAR ANALYTICS SWA (in USA: Integrated MODULAR ANALYTICS, IMA), hereafter MODULAR system, represents the integration of comprehensive systems for traditional clinical chemistry and for heterogeneous immunoassays into a single system for essentially all chemistry analytes.

Here we present the results of our studies at 6 laboratories with a single system processing a selection of 30 to 50 different tests for clinical chemistry, specific proteins, therapeutic drugs, and immunochemistry determinations.

Our goals were to

(1) evaluate the functionality and practicability of the analyser;
(2) determine whether improved efficiency would be realized by integrating clinical chemistry with heterogeneous immunoassay testing;
(3) test for possible effects on the quality of results (reproducibility, carryover) due to consolidation.

In addition, we predicted that there would be a reduction of sample splitting, the elimination of multiple user interfaces, and a reduction of hands-on labour.

Experiments were performed on MODULAR system in five laboratories over a period of five months. At a sixth site, a larger hardware configuration was tested afterwards.

2. Materials and Methods

MODULAR ANALYTICS Serum Work Area combines previously evaluated modular systems for clinical chemistry and immunochemistry: MODULAR ANALYTICS D, P and MODULAR ANALYTICS E [2, 3].

The MODULAR system consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and four kinds of analytical modules—an ISE module for the electrolytes Na, K, and Cl with a maximum throughput of 900 tests/hour, a P800 module with a capacity of 44 spectrophotometric tests on board and a maximum throughput of 800 tests/hour, a D2400 module with 16 spectrophotometric tests and a maximum throughput of 2400 tests/hour, and an E170 module using electrochemiluminescence technology with a capacity of 25 immunochemistry reagents on board and a throughput of up to 170 tests/hour. The configurations of MODULAR system are versatile and allow customised module combinations for various laboratory workload patterns. Of the several available hardware combinations, three different combinations of the clinical chemistry modules D and P and the immunochemistry module E were evaluated at the six sites (3 PE, 2 PPE, and 1 DPE); all systems included an ISE module. Figure 1 shows the schematic structure of MODULAR system.

Figure 1: Schematic structure of MODULAR system.

The instruments used in the different laboratories for comparison with MODULAR system during the workflow study were MODULAR ANALYTICS P, PP, and E, Elecsys 2010 (Elecsys is a trademark of a member of the Roche group), Hitachi 747 and 917, all from Roche Diagnostics (Mannheim, Germany); the BNA II protein analyser from Dade Behring (Liederbach, Germany); the ADVIA Centaur and ACS:180 from Bayer (Tarrytown, NY, USA); and the AxSYM from Abbott Laboratories (Abbott Park, Illinois, USA).

The methods selected for the workflow studies covering approximately 80 analytes with 30 to 50 applications per laboratory are summarised in Table 1. For the imprecision runs and functionality testing, only a subset of these methods was processed at each laboratory. The reagents for MODULAR system were the respective system packs from Roche Diagnostics. The calibration of the tests was done according to the requirements set by the manufacturer using the calibration materials from Roche Diagnostics. The daily quality control was performed with control sera also provided by the manufacturer.

Table 1: List of analytes used during the performance evaluation and within-run imprecision for selected analytes (cells with a CV number: analytes were used for within-run imprecision, x: analytes were added for the workflow experiments; CS1 control serum PNU from Roche, HS human serum pool, HU human urine pool, analyte concentrations within or slightly above reference range; CS pool, control serum of a low- and high-level control).

Depending on the analyte, either control material or human specimen pools were used for the imprecision and routine simulation imprecision experiments. Samples for the workflow experiments included serum, heparinized plasma, and urine from the daily routine.

The performance evaluation was supported by CAEv, a program for Computer Aided Evaluation [4]. This program allows the definition of experiments, the sample and test requests, on-line/off-line data transmission, and the immediate data validation by the evaluators.

3. Evaluation Protocol

3.1. Within-Run Imprecision

Two control materials (serum, urine) with different concentrations of the analyte (or, for some analytes, a human specimen pool at the diagnostic decision level) were used. The experiment was performed on two days with 21 aliquots per run.

3.2. Precision in a Simulated Routine Run

Experiments for routine simulation are designed for functionality testing of an analytical system in the clinical laboratory. The protocol [5] has proven to be a useful tool during various analyser evaluations [6].

This particular experiment tests for potential systematic or random errors by comparing the imprecision of the reference results (standard batch) with results from samples run in a pattern simulating routine sampling (randomized sample requests). The randomized sample requests were simulated in CAEv [4] according to each laboratory's routine sampling pattern. The samples were control materials or patient sample pools. The number of requests varied with module combination but was aimed at keeping the analyser in operation for at least four hours. The second and third of the three experiments processed at each site included provocation incidents like reagent or sample shortage, barcode read errors, and various reruns.
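The batch-versus-random comparison above reduces to computing and comparing within-run CVs for the two parts of the experiment. A minimal sketch in Python (the actual evaluation used the CAEv program; the replicate values below are hypothetical):

```python
import statistics

def cv_percent(results):
    """Coefficient of variation (%) of a series of replicate results."""
    return 100.0 * statistics.stdev(results) / statistics.mean(results)

# Hypothetical replicates of the same pool, measured once as a batch
# (reference part) and once interspersed in a randomized routine pattern.
batch_results = [4.02, 4.05, 3.98, 4.01, 4.03, 4.00, 3.99, 4.04]
random_results = [4.01, 4.08, 3.95, 4.06, 3.97, 4.02, 4.05, 3.98]

cv_batch = cv_percent(batch_results)
cv_random = cv_percent(random_results)

# A markedly higher CV in the random part would point to systematic or
# random errors introduced by routine-like sample processing.
print(f"batch CV  = {cv_batch:.2f}%")
print(f"random CV = {cv_random:.2f}%")
```

In the study, a random-part CV exceeding the agreed expected-performance limit triggered closer inspection of the series (see Section 4.2).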

3.3. Sample Carryover

Potential sample-related carryover was investigated using a slightly modified version of the Broughton protocol [7]. Only analytes with a very high physiological concentration range were tested. Ideally, the ratio of the concentrations of the high and low samples should be, depending on the analyte, 10³ to 10⁶. Three aliquots of a high-concentration sample (h) were followed by measurements of five aliquots of a low-concentration sample (l1 to l5) on each module. The sequence h h h l1 l2 l3 l4 l5 was repeated five times.

Each sample was measured on the ISE module first, then on the D and/or P module, and finally on the E170 module, thereby ensuring that reusable pipette probes were introduced multiple times prior to sampling on the E170 module, where disposable (nonreusable) pipette tips are used. If a carryover effect from the ISE and D/P module sample probes exists, l1 will be the most influenced, and l5 the least influenced, aliquot when measured on the E-module. The carryover effects were compared with the imprecision of the low-concentration samples and the diagnostic relevance of the respective E-module assays. Potential sample carryover of the following analytes was tested: AFP, CEA, ferritin, anti-HAV, HBsAg, HCG+β, and t-PSA.
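The two carryover criteria used later in the Results section (a shift of more than 10% of the lower medical decision level, or exceeding the 2 SD value of the low sample's imprecision) can be sketched as follows. This is an illustrative reading of the protocol, not the evaluators' actual analysis code; the numbers in the test are invented:

```python
import statistics

def carryover_flag(l1_runs, l5_runs, decision_level):
    """Broughton-style check: is the first low aliquot (l1) elevated
    relative to the last (l5) beyond imprecision and clinical relevance?

    l1_runs, l5_runs: results for l1 and l5 across the repeated
    h h h l1..l5 sequences; decision_level: lower medical decision level.
    Assumption: carryover is called relevant if either criterion is met.
    """
    diff = statistics.mean(l1_runs) - statistics.mean(l5_runs)
    sd_low = statistics.stdev(l5_runs)
    exceeds_decision = diff > 0.10 * decision_level
    exceeds_imprecision = diff > 2.0 * sd_low
    return exceeds_decision or exceeds_imprecision
```

For ferritin and anti-HAV, neither criterion was met (Section 4.3), so the function above would return `False` for those assays.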

3.4. Workflow Study

The participating sites performed this study to investigate whether or not MODULAR system met their routine laboratory-specific needs, especially for improved efficiency. As shown in Table 2, module combination, analyte assignment, tests per sample, numbers of samples, samples per module, and tests per module were very different at each laboratory. Three methods were used to capture the test requests on samples so that the same testing could be repeated on MODULAR system: test requests were either downloaded from the laboratory's LIS to CAEv, captured directly by CAEv from several analysers during routine operation, or generated by CAEv as a "characteristic" request list simulated from typical test frequencies and profiles of the laboratory. In all cases, the same sample set, usually a predefined substantial portion of a day's workload, was processed on MODULAR system.

Table 2: Overview of processed workloads at the participating laboratories. (For explanation see materials and methods section, workflow study.)

Samples were loaded on MODULAR system chronologically as they appeared in the lab to mimic the laboratory’s routine pattern of receiving samples. All relevant time steps and workload related activities like sample and reagent handling, instrument preparation, loading and reloading of sample racks, and technologist time (both hands-on and walk-away) were measured.

3.5. Practicability

Practicability of the system was assessed throughout the study. A questionnaire—a supplement to the general questionnaire [8], which was previously used for the assessment of the single modules—was designed especially for a consolidated sample working area. This allowed for a standardized grading with the main focus on aspects of clinical chemistry and immunochemistry consolidation and new software features.

3.6. Expected Performance

The protocol included expected performance criteria which were agreed upon at the evaluators’ first meeting. The criteria for imprecision were based on state-of-the-art performance, routine requirements of the laboratories, and statistical error propagation [9].

4. Results

Across all experiments, 236000 results from 32400 samples were generated using 93 methods.

4.1. Imprecision

The within-run imprecision met the expected performance criteria at virtually all sites. Typical within-run CVs for the enzyme and substrate analytes were 1 to 2%, for the ion selective electrode (ISE) methods 0.5%, for the specific proteins and drug analytes 1 to 3%, for the urine chemistry methods 1 to 2%, and for the heterogeneous immunoassays (with the indication: thyroid, cardiac, anaemia, tumour markers and fertility) 1 to 3% (Table 1).

4.2. Functionality Testing

The six laboratories performed 44668 determinations during the random part of the routine simulation, covering 87 analytes in 733 series. CVs obtained from the precision in a simulated routine run experiment for the various assay groups (ISEs, enzymes/substrates, urine analytes, proteins/TDMs, and heterogeneous immunoassays) are summarized in distribution diagrams for the reference (batch) and random parts (see Figure 2). Of all 733 series, 13 (1.8%) showed higher CVs than the expected limit in the random part (9 in the enzyme/substrate group, 2 each in the urine and immunoassay groups). Seven of these CVs were only moderately increased (1 to 2% higher than the limit). Of the remaining 6 series (5.3 to 22.8% CV), the highest CV was caused by an unexplained, nonreproducible outlier with a very low result in one series of the albumin in urine test. With the outlier removed, the CV was 1.2%. In all cases, the higher CVs were observed in only one of the three simulated routine series per laboratory (with tests like lipase, uric acid, albumin in urine, and CA125), and there was no association with any malfunction of the instrument or reagent. A software issue associated with the E-module masking/unmasking during a provocation was also identified during these experiments (shift of the results with the FT3 assay).

Figure 2: Precision in a simulated routine run; distribution of 733 within-run CVs in reference (batch) and random parts; 15 replicates in the reference part. (i) Expected performance limit for within-run imprecision (solid line); (ii) expected performance limit for randomised runs (dashed line).
4.3. Sample-Related Carryover

Table 3 summarizes the carryover effects seen when the high priority settings were intentionally turned off for a group of tests considered at high risk for sample carryover. Only results from the laboratories with the highest concentration ratio (high/low) are included in the table. Of the 7 assays for which we expected to see sample-related carryover because of the wide dynamic range of the analytes, our testing indicated potentially clinically relevant problems with 5 (AFP, CEA, HBsAg, HCG+β, and t-PSA). By utilizing the "high priority test" option, samples with requests for these assays, which also had requests for ISE, D-, and/or P-module tests, were automatically processed at the E-module first, eliminating the possibility of carryover for these samples and tests. For the other two (ferritin and anti-HAV), neither criterion for carryover was met (more than 10% of the (lower) medical decision level, or exceeding the 2 SD value). According to investigations by the manufacturer, two additional carryover-sensitive infectious disease assays were identified: anti-HBs and anti-HBc.

Table 3: Sample related carryover with high priority test option off. With high priority test option on, sample carryover cannot occur. (For explanation see results section, sample-related carryover.)
4.4. Workflow

The module combinations (PE, PPE, DPE) and test menu configurations used at the different laboratories were selected to meet their specific workload demands. An overview is presented in Table 2. To reflect true routine conditions, the samples were placed on the system in a sequence mimicking the original arrival pattern in the laboratory, rather than continuously, which would have tested the system's potential sample loading capacity. The resulting cumulative throughput was up to 800 results/hour with the PE module combination and up to 1580 results/hour with the PPE module combinations. A throughput of approximately 2160 results/hour was achieved with the DPE module combination in laboratory 6. In most of the laboratories, the number of samples processed was not enough to reach the system's maximum throughput capacity.

In addition to throughput, we looked carefully at sample processing time (SPT), the time between sample registration (barcode reading on the instrument) and the time the last result for that sample is produced. Note that SPT differs from sample turnaround time (TAT), a commonly used term describing the period from when the sample arrives in the laboratory to the availability of the last result.

The following mean sample processing times were found for the different sample groups in five laboratories:

(i) 13–18 minutes for samples with general chemistry requests only (ISE + P or ISE + P1 + P2),
(ii) 22–28 minutes for samples with immunoassay requests only (E),
(iii) 29–38 minutes for samples with combined requests (ISE + P + E or ISE + P1 + P2 + E).

The mean SPTs obtained with a DPE combination were comparable: 16 minutes for ISE + D + P, 26 minutes for E, and 27 minutes for ISE + D + P + E.

We compared SPT of MODULAR PPE with the current six dedicated routine analysers for a predetermined time period, representing approximately 40% of a day’s workload in laboratory 5. Figure 3 shows that the time to results for samples with clinical chemistry requests on MODULAR system is comparable with that of the dedicated routine analysers (mean time 15 minutes, 80th percentile 20 minutes, maximum 38 minutes). Samples with combined requests for both clinical chemistry and immunochemistry were processed faster (mean time 34 minutes, 80th percentile 40 minutes, maximum 1 hour) than on the dedicated analysers (mean time 46 minutes, 80th percentile 58 minutes, maximum 1.8 hours).
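The SPT statistics quoted above (mean, 80th percentile, maximum) are straightforward to derive from per-sample timestamps. A small sketch, with hypothetical times in minutes; the percentile convention is an assumption, as the paper does not state its method:

```python
import statistics

def spt_summary(registration_times, last_result_times):
    """Sample processing time (SPT) statistics in minutes.

    SPT = time from sample registration (barcode read on the instrument)
    to availability of that sample's last result.
    """
    spts = sorted(t1 - t0 for t0, t1 in zip(registration_times, last_result_times))
    mean = statistics.mean(spts)
    # 80th percentile via the "inclusive" quantile convention (assumed).
    p80 = statistics.quantiles(spts, n=5, method="inclusive")[3]
    return mean, p80, max(spts)
```

Applied to the full set of registration and last-result timestamps for a day's workload, this yields exactly the kind of summary shown in Figure 3.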

Figure 3: SPT on MODULAR system and dedicated routine analysers representing 40% of a daily routine workload.

Depending on test, module and number of racks waiting in the rerun buffer, rerun results are reported 10–35 minutes after availability of first results. An example of typical processing times to first results and to final results (including rerun samples) is shown in Figure 4.

Figure 4: SPT with focus on availability of rerun results.

MODULAR SWA supports "reflex testing," if the laboratory information system (LIS) offers this functionality. Frequently practiced in certain indication fields, this feature allows the automatic request of a further analyte if a predefined concentration or concentration range of the originally requested analyte is exceeded. Examples: if TSH < 0.27 or > 4.2 mIU/L, FT4 is determined in addition; if PSA > 4.0 µg/L, free PSA is also measured; and so on. Even though it may no longer be as clinically relevant, reflex testing functionality was assessed using a combination of P- and E-requests: CK → CK-MB (enzymatic) + TnT. The SPT for such a sample with two additional reflex tests was increased by 30 to 55 minutes (Figure 5).
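The reflex rules quoted above are simple threshold rules, which in practice live in the LIS and are laboratory-defined. A hypothetical sketch using only the two examples given in the text:

```python
def reflex_tests(results):
    """Return additional tests to request, per the reflex rules quoted
    in the text (TSH -> FT4, PSA -> free PSA). Thresholds are taken from
    the examples above; real rules are configured in the LIS.
    """
    extra = []
    tsh = results.get("TSH")  # mIU/L
    if tsh is not None and (tsh < 0.27 or tsh > 4.2):
        extra.append("FT4")
    psa = results.get("PSA")  # ug/L
    if psa is not None and psa > 4.0:
        extra.append("free PSA")
    return extra
```

On the analyser side, the LIS sends back the extra requests and the sample, still on board, is simply sampled again, which is what produces the 30 to 55 minute SPT increase noted above.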

Figure 5: SPT with focus on reflex testing.

Does the sample carry-over setting, which tags the assay in question automatically as high priority by the system, influence the SPT? We compared samples having combined requests (on P- and E-module) with and without high priority assays. With auto rerun off, there was no result delay. The processing times were increased by 10–15 minutes with auto rerun activated, where processing on P module was delayed until final E-module results were available.

Maintenance and troubleshooting are activities which may also considerably influence the daily workflow. For a modular system, the question arises whether the entire system or only the affected module is blocked in order to remedy a problem after, for example, a sampling stop alarm. This type of alarm results in the module discontinuing pipetting of samples. The different time steps for two such alarms were monitored on a PPE combination at one site. For a provoked tip/vessel pickup-error on the E-module, the elapsed time from getting the alarm, allowing the module to finish the tests in process, taking the module down, then fixing the problem, and getting the module back into operation was a total of 35 minutes; for a provoked abnormal cap mechanism movement 22 minutes. While the E-module was unavailable, the ISE and P-modules continued to process samples, and samples requiring E-module tests were stored in the rerun buffer to be run automatically when the E-module came back online.

An important aspect of instrument consolidation on a single platform is reduction in personnel hands-on time. In laboratory 5, we compared hands-on time associated with MODULAR system with that of the 6 existing dedicated analysers. As shown in Figure 6, the operators saved about 10 hours based on the sample workload; the main contribution was sample handling time. MODULAR system was operated by 1 technologist while the 6 dedicated analysers required 3 persons.

Figure 6: Hands-on time on MODULAR system compared to dedicated routine analysers representing 40% of a daily routine workload.

One of the participating laboratories (laboratory 1) simulated a workflow using MODULAR system as a dedicated immunoassay analyser. Tests included 24 homogeneous tests (10 specific proteins, 6 therapeutic drug tests, and 8 drugs of abuse tests) on P-module and 18 heterogeneous assays (thyroid, cardiac, anaemia, and tumour markers) on the E-module, with samples loaded in a simulated routine-type pattern. The average sample processing times for the various request patterns were comparable with those mentioned previously (<35 minutes).

4.5. Practicability

With the aid of a questionnaire, the practicability of MODULAR system was graded as good as (23% of all scores) or better than (68%) the evaluators' currently used routine analysers.

5. Discussion

The overall assessment of the experiments is positive. This was the first time during an evaluation that various laboratory segments, with an extensive menu for general chemistry, specific proteins, drugs, and immunochemistry, could be combined on one platform.

5.1. Imprecision

Since analytical performance was previously verified for the single MODULAR systems [2, 3], this study did not include extensive analytical performance data. However, one or two imprecision runs were processed for representative tests from each analyte group to assure that the system was performing correctly. Typical within-run CVs of 1 to 3% across the menu of nearly 90 tests were all within the expected performance and can be rated as excellent. We emphasize that the heterogeneous immunoassays performed with electrochemiluminescence technology showed reproducibility similar to the general chemistry tests and well within clinical demands (see Table 1) [10–12].

5.2. Functionality

The overall low CVs for all analyte groups in the simulated routine imprecision runs proved that general chemistry and immunochemistry worked very well together and that, even under simulated stress routine conditions, there was no indication of systematic or random errors. The 6 high CVs in the routine simulation experiment occurred in only one of the 3 runs per laboratory, and there was no indication that the deviant results were reproducible. The routine simulation precision experiments demonstrated good performance and full functionality of the instrument. Because of the sensitivity of the experimental design, it was possible to identify one severe instrument problem, associated with the E-module masking/unmasking feature, during provocation of the analyser. The error was corrected with a software upgrade, and the correct implementation was confirmed with further routine simulation runs at all sites. Throughout all other testing in a routine environment, the routine simulation data showed that the instruments reacted correctly.

5.3. Sample Carryover

MODULAR system runs with new user software, combining and unifying the functionality and features of the single modules and optimizing the processing of clinical chemistry and immunochemistry requests. For example, sample carryover to some sensitive immunoassays cannot occur, due to intelligent sample processing whereby samples with requests for carryover-sensitive assays, referred to as high priority tests, are processed at the immunology module (E) first. High priority tests are user-definable and do not delay processing of other samples, even samples in the same sample rack. As mentioned in the Results section, processing samples with high priority requests with "auto rerun" activated took 15 minutes longer in comparison to the usual samples. This, however, reduced potential risks and eliminated any manual operator intervention. If there are only very few specimens with concentrations above the upper measuring range limit of the high priority tests, the laboratory manager can decide to deactivate auto rerun without any high risk of quality loss, but with acceleration of result availability.

5.4. Workflow

Workflow depends strongly on the laboratory environment, the sample loading pattern, and on the MODULAR configuration. Our studies show that MODULAR system offers the flexibility to fit and meet the requirements of the individual laboratory. The variations in throughput at the different sites can be explained by the lab-specific workloads and sample loading patterns.

The processing times for the sample groups with general or immunochemistry requests were similar to those known from the respective stand-alone modules, thus showing that there was no relevant increase when combining photometric/ISE and E-modules. In other words, the immunochemistry module did not slow down the clinical chemistry modules. An average processing time of approximately 35 minutes for the combined group was rated as very acceptable, bearing in mind that, with the current routine instrumentation, those samples were either measured sequentially on different instruments or required additional hands-on time for splitting/aliquoting. In fact, when these additional times were included, as was done in one laboratory, the mean sample TAT decreased by three hours (from 3.5 to 0.5 hours) using MODULAR system.

One laboratory used the PE combination to simulate a dedicated immunoassay analyser covering various laboratory segments. In this hospital there is a separate sample collection and order process for certain analytes, which are presently performed on a variety of single analysers; therefore, sample splitting is not necessary. The current dedicated analysers for protein determinations, drug monitoring, or tumour marker measurements could be replaced by a consolidated workstation, so that only one operator would be needed to perform these various immunoassays. The laboratory management estimated a 30 to 50% reduction in manpower for this work on MODULAR system.

During the daily routine, a certain percentage of assays (usually <5%) need a repetition of the analysis, because the measuring range or a defined repeat limit based on laboratory policy is exceeded. The portion of repeat measurements due to analytical range limitations on MODULAR system is usually smaller than 0.5% [2]. MODULAR system offers a user selectable automatic rerun feature, which can be activated or deactivated for each test.

The advantages of automatic rerun (no sample tracking and retrieval, no manual sample predilution, and no manual reloading) not only increased the safety of results by minimizing possible human error, but also reduced processing and hands-on times.

Also, the fact that MODULAR system supports reflex testing simplifies the workflow. It is not necessary to wait for result validation and confirmation from the ward to perform the additional reflex assay. This is especially important for outpatients, since this procedure could avoid a second hospital visit. Even if samples are held for further tests, reflex testing is better than the alternatives: measuring all tests at the start, or manual intervention to locate and transport the samples. When including the benefits of automatic rerun analysis and reflex testing, results were available within 30 to 70 minutes.

Since the time of this evaluation, the use of MODULAR system has confirmed these data over a long period of routine work. When comparing the hands-on times captured at the different sites over one to two days, MODULAR system yielded a clear advantage. Monitoring over an extended period would be necessary to obtain more extensive data, but this exceeded the scope of the study. Nevertheless, it is obvious that there is a potential for saving personnel capacity, since fewer instruments need fewer persons for operation. MODULAR system requires a skilled operator similar in qualification to those of the existing analysers compared in this study. However, this person must also be able to cope with the validation of a large amount of data produced within a short time, or have autoverification available.

5.5. Practicability

The practicability of MODULAR system met or exceeded the requirements of all participating laboratories for 91% of all attributes rated. An opportunity for improvement was seen in the time required to prepare the analyser for routine use even though this was one half to three quarters of the time required for the dedicated routine analysers. Apart from the QC measurements which were processed directly before routine sampling start, the flexibility of MODULAR system with background maintenance features allows other tasks to be performed at any suitable time throughout the shift. Completion of initial QC measurements for the extended menu processed at the different sites took an average 30 minutes.

The main advantage mentioned by the evaluators was the consolidation effect, resulting in a simplified workflow with fewer instruments, reduced overall processing time, reduced hands-on time, and increased efficiency without increased staffing, while maintaining or even improving quality.

6. Conclusion

Our experience with the MODULAR ANALYTICS SWA indicates that, both functionally and practically, the analyser is a favourable addition to the clinical laboratory. Each of the various module configurations included in this study easily and efficiently managed routine and nonroutine tasks in the simulated routine scenarios. Overall, samples with combined requests running in routine workloads, from a menu of about 50 assays, were processed in approximately 35 minutes, or 30 to 70 minutes including reruns and reflex testing. We saw no negative effects on the quality or timeliness of test results when combining general clinical chemistry with heterogeneous immunochemistry assays on the same analyser. In fact, we found that efficiency improved, in some cases substantially decreasing sample turn-around time, operator hands-on time, and personnel requirements, while maintaining or improving the quality of laboratory processes.


Acknowledgments

The authors wish to thank all of their coworkers in the respective laboratories and departments participating in the study for their excellent support. The MODULAR instrument, personal computer with CAEv software, reagents, and disposables were supplied by Roche Diagnostics for the duration of the study.
