ISRN Critical Care
Volume 2013 (2013), Article ID 347346, 6 pages
Research Article

Comparing Drug-Drug Interaction Severity Ratings between Bedside Clinicians and Proprietary Databases

1Cardiothoracic Intensive Care Unit and Department of Pharmacy and Therapeutics, University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA 15261, USA
2Department of Pharmacy and Therapeutics and Critical Care Medicine, Clinical Translational Science Institute and School of Pharmacy, Center for Pharmacoinformatics and Outcomes Research, University of Pittsburgh, Pittsburgh, PA 15261, USA
3Department of Pharmacy, University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA 15261, USA
4Medical Intensive Care Unit and Department of Pharmacy and Therapeutics, University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA 15261, USA
5Department of Pharmacy and Therapeutics, School of Pharmacy, University of Pittsburgh, Pittsburgh, PA 15261, USA
6Surgical Intensive Care Unit and Department of Pharmacy and Therapeutics, University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA 15261, USA

Received 20 September 2012; Accepted 16 October 2012

Academic Editors: F. Cavaliere, A. M. Japiassu, D. Makris, and A. K. Mankan

Copyright © 2013 Michael J. Armahizer et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Purpose. The purpose of this project was to compare DDI severity as rated by clinicians in the context of the patient's clinical status with the severity assigned by proprietary databases. Methods. This was a single-center, prospective evaluation of DDIs at a large, tertiary care academic medical center in a 10-bed cardiac intensive care unit (CCU). A pharmacist identified DDIs using two proprietary databases. The physicians and pharmacists caring for the patients evaluated the DDIs for severity while incorporating their clinical knowledge of the patient. Results. A total of 61 patients were included in the evaluation and experienced 769 DDIs. The most common DDIs included aspirin/clopidogrel, aspirin/insulin, and aspirin/furosemide. Pharmacists ranked the DDIs identically 73.8% of the time, compared to the physicians, who agreed 42.2% of the time. Pharmacists agreed with the more severe proprietary database scores for 14.8% of DDIs versus physicians at 7.3%. Overall, clinicians agreed with the proprietary database 20.6% of the time, while clinicians ranked the DDIs lower than the database 77.3% of the time. Conclusions. Proprietary DDI databases generally label DDIs with a higher severity rating than bedside clinicians do. Developing a DDI knowledgebase for CDSS requires consideration of the severity information source and should include the clinician.

1. Introduction

Adverse drug events (ADEs) may occur due to medication errors (MEs), pharmacokinetic alterations, drug-drug interactions (DDIs), and drug-disease interactions, with research revealing that both the incidence and severity of ADEs are heightened in intensive care unit (ICU) patients [1, 2]. An ADE is defined as an undesirable clinical manifestation that is consequent to and caused by the administration of medications, as well as events due to error [3]. Drug-drug interactions contribute to ADEs when the efficacy or toxicity of a medication is altered by the administration of another substance, causing a reduction in the intended therapeutic effect or an increase in the expected toxicity profile [4]. Automated clinical decision support systems (CDSSs) within most computerized prescriber order entry (CPOE) programs have contributed to error reduction by prospectively identifying potential medication allergies, interactions, or overdoses and may reduce the incidence of DDIs by 50% [5, 6]. Notably, only 1 out of 15 interactions in a cardiac ICU is considered major or contraindicated by proprietary DDI databases, and excessive DDI alerting may cause “alert fatigue” [7].

Alert fatigue is defined as a desensitization of clinicians to the overwhelming number of DDI notifications that occur during medication ordering and verification, and it contributes to the override of between 49 and 96% of alerts [5, 6, 8–12]. Only 11% of DDI alerts generated by CDSSs are considered useful; however, 69% of these useful alerts lead to a change in clinical management [13]. Clinical decision support systems must be further modified in an effort to improve the delivery of clinically relevant, useful information and decrease the number of unnecessary and invalid alerts.

Several methods to improve alerts have been suggested, such as refining alert specificity by linking alerts to clinically relevant patient parameters and customizing the system to include only a limited number of clinically important alerts [5, 10, 16–18]. Tiering of alert systems based on the perceived severity of the DDI could also be used [19]. Tiering systems have demonstrated a higher rate of compliance with DDI alerts; however, the optimal rationale for determining the severity of DDIs within tiering systems has not been fully elucidated. While these suggestions appear logical, there are limited data testing their benefit, and to our knowledge no evaluation of DDI severity has been completed in the presence of patient-specific clinical data. The overarching goal of this quality improvement project is to improve the institution’s DDI CDSS by identifying clinically relevant DDIs. The primary objective is to compare DDI severity based on clinician opinion and proprietary database determinations in the context of the patient’s clinical status.

2. Methods

This was a single-center, prospective evaluation of potential DDIs at a large, tertiary care academic medical center. Data collection was conducted between October 11, 2010 and November 5, 2010 in a 10-bed cardiac intensive care unit (CCU). The protocol was approved by the institution’s Quality Improvement Committee.

Each patient admitted to the CCU during the study period was assessed for potential DDIs. On the first day of the study, the medication administration records of all patients were reviewed for DDIs; patients admitted subsequently were reviewed on the day of arrival to the CCU. After the initial medication record review, additional potential DDIs were identified daily when a new drug was ordered. All medications, including one-time orders and as-needed orders, were assessed. Drug-drug interactions that occurred during the weekend and evening were evaluated by the participants on Monday and the following day, respectively. Patients were followed throughout their entire stay in the CCU.

A clinical pharmacist (MJA) rounded with the patient care team, generated patient-specific DDI reports using Micromedex and Lexicomp drug interaction software, and identified all potential DDIs involving medications currently prescribed to patients in the CCU [14, 15]. Individual DDIs were assessed only once for each patient during the study period. The list of potential DDIs with the interaction mechanism was provided to the physicians (attending and fellow) caring for the patients, the clinical pharmacist rounding with the team (MJA), and a second nonrounding clinical pharmacist (AA) who was verifying the medication orders. A sample of this DDI report is included in Table 1. The clinical pharmacists had both completed residencies in pharmacy practice and critical care pharmacy, while the physicians were both cardiologists. The physicians and pharmacists were asked to rate the severity of each potential DDI while incorporating their clinical knowledge of the patient. The rating scale provided to the clinicians used rankings ranging from A to D plus X and is detailed in Table 2 [14, 15]. The severity scores assigned to the potential DDIs by the clinicians and those assigned by the proprietary databases (Micromedex and Lexicomp) were compared.
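Because the A-to-D/X scale is ordinal, the comparison performed in this study is straightforward to mechanize. A minimal Python sketch follows; the numeric ranks and function names are illustrative assumptions, not part of the study protocol:

```python
# Illustrative ordinal encoding of the A-D/X severity scale (assumed ranks,
# not taken from the proprietary databases themselves).
SEVERITY_RANK = {"A": 0, "B": 1, "C": 2, "D": 3, "X": 4}

def more_severe(rating1: str, rating2: str) -> str:
    """Return the more severe of two database ratings on the A-X scale."""
    return max(rating1, rating2, key=SEVERITY_RANK.__getitem__)

def compare_to_databases(clinician: str, micromedex: str, lexicomp: str) -> str:
    """Classify a clinician rating against the more severe database rating."""
    worst = more_severe(micromedex, lexicomp)
    if SEVERITY_RANK[clinician] == SEVERITY_RANK[worst]:
        return "agree"
    return "lower" if SEVERITY_RANK[clinician] < SEVERITY_RANK[worst] else "higher"
```

This mirrors the comparison against the more severe of the two database ratings that is reported later in the results.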

Table 1: Drug-drug interaction example.
Table 2: Drug-drug interaction rating scale [14, 15].

3. Results

A total of 61 patients were included in the evaluation, and 769 potential DDIs were identified, of which 419 were unique DDIs (i.e., occurred only once). Discrepancies between the proprietary databases were noted, with Lexicomp identifying 688 DDIs and Micromedex identifying 435 DDIs. Only 353 interactions were identified simultaneously by both databases. Among the interactions identified by the databases, discrepancies in the severity rating were also noted (Table 3). The DDIs most commonly identified by the databases were aspirin and clopidogrel (2.7%), aspirin and insulin (2.7%), and aspirin and furosemide (2.5%) (Table 4).

Table 3: Drug-drug interactions by severity.
Table 4: Most common drug-drug interactions identified.

The number of potential DDIs evaluated by each clinician differed because of the alternating times at which each clinician provided direct patient care: pharmacists 1 and 2 each evaluated 769 potential DDIs, physician 1 evaluated 240, and physician 2 evaluated 575. Both pharmacists evaluated all 769 potential DDIs, while both physicians evaluated only 192 potential DDIs in common. Interaction severity agreement differed between the proprietary databases and the evaluators (Figure 1): Micromedex and Lexicomp agreed for 39.4% of interactions, pharmacists agreed for 73.8%, and physicians agreed for 42.2%. Pharmacists agreed with each other statistically more frequently than physicians agreed with each other. All evaluators agreed on the severity rating only 17.7% of the time. Pharmacists agreed with the Micromedex and Lexicomp ratings 8.5% and 24.3% of the time, respectively, whereas physicians agreed with those ratings 5.2% and 6.8% of the time, respectively. Agreement with Micromedex was not statistically different between pharmacists and physicians; however, pharmacists agreed more frequently with Lexicomp than physicians did. The clinician evaluations were also compared to the more severe database rating provided for each interaction: pharmacists agreed with the more severe proprietary database rating for 14.8% of potential DDIs versus physicians at 7.3% (Table 5).
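The pairwise agreement figures above reduce to a simple percent-agreement calculation over the DDIs both raters evaluated. A hedged sketch follows; the data layout and the example ratings are invented for illustration and are not the study's data:

```python
def percent_agreement(ratings1: dict, ratings2: dict) -> float:
    """Percent of DDIs rated identically by two raters, over shared DDIs only."""
    shared = ratings1.keys() & ratings2.keys()  # DDIs both raters evaluated
    if not shared:
        return 0.0
    matches = sum(ratings1[ddi] == ratings2[ddi] for ddi in shared)
    return 100.0 * matches / len(shared)

# Hypothetical example: two raters sharing three interactions.
rater1 = {"aspirin+clopidogrel": "C", "aspirin+insulin": "B", "aspirin+furosemide": "B"}
rater2 = {"aspirin+clopidogrel": "C", "aspirin+insulin": "B", "aspirin+furosemide": "C"}
print(round(percent_agreement(rater1, rater2), 1))  # prints 66.7
```

Restricting the denominator to shared DDIs matters here, since each evaluator reviewed a different number of interactions.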

Table 5: Drug-drug interaction agreement between evaluators.
Figure 1: Drug-drug interaction severity by all sources.

Furthermore, an evaluation of DDI severity agreement was conducted, as illustrated in Table 6. Clinicians agreed with the proprietary database severity rating 20.6% of the time, rated the DDI as less severe than the database 77.3% of the time, and rated it as more severe only 2.1% of the time.

Table 6: Drug-drug interaction severity agreement.

Finally, an evaluation of contraindicated DDIs was conducted to determine their potential clinical relevance. A total of five (0.7%) contraindicated DDIs were discovered during the evaluation (Lexicomp: 4, Micromedex: 1). Among the five contraindicated DDIs, the majority of evaluators rated two as category B (minor severity/no action needed) and three as category C (moderate severity/monitor therapy); the remaining evaluator ratings were discrepant (Table 7).

Table 7: Clinician severity rating of contraindicated drug-drug interactions.

4. Discussion

Much like ADEs, DDIs are complicated to evaluate, especially when attempting to decipher severity. The overall agreement among the 4 healthcare professionals in our study was approximately 17%, matching the frequency of agreement reported for 5 healthcare professionals in their assessment of ADEs [20]. While DDI severity ratings have previously been compared between databases, and proprietary databases have been compared to clinician opinion, to our knowledge this has not been done in the context of the patients’ condition [7, 18, 21, 22].

A major finding of this study is that proprietary databases consistently rate the severity of DDIs higher than healthcare professionals do when the professionals assess them in the context of patient care data. This may be due, in part, to the clinician’s understanding of the patient’s medical problems and the necessity of treating patients with a specific drug combination while monitoring for adverse drug reactions (ADRs) caused by that therapy, whereas the databases simply report all DDIs that may occur. Among the contraindicated DDIs found, none seemed clinically relevant based on the administration route and the ADRs associated with their concomitant use. This raises an interesting question: should proprietary DDI database rankings be modified, or is it important that these warnings are given to providers in order to promote safe medication use and allow clinicians to make the appropriate risk versus benefit assessment?

Differing ratings between the pharmacist reviewers and physician reviewers were noted in this study. This is most likely explained by the differing levels of exposure and training received by each provider group. At our institution, all DDI alerts are reported to pharmacists at the time of order entry, whereas physicians see only a small number of DDI alerts. Additionally, many pharmacists receive training regarding DDIs during their formal education, which may contribute to an increased familiarity. The background of the pharmacists involved in this study may have contributed to their ranking of the DDIs, in that their previous training made them more aware of DDIs and their impact on patients. Likewise, the cardiologists involved in this study have an extensive knowledge of the medications typically used in the CCU and understand the DDIs that can and do occur in their patients.

In this prospective evaluation, the most commonly identified DDIs were typical for the patient population studied. It is no surprise that interactions involving aspirin, clopidogrel, insulin, and furosemide were most commonly identified, as these medications are administered to almost every patient treated in the CCU. Approximately 12 DDIs were identified per patient during the CCU stay. Drug-drug interactions have been associated with an increase in patient morbidity and mortality [23]. Moura and colleagues found that the median ICU length of stay (LOS) among patients with at least one DDI was significantly longer than among patients not experiencing DDIs (12 days versus 5 days), while Reis and colleagues showed that 7% of ADEs corresponded to DDIs in a cohort of patients treated in an intensive care unit [24, 25]. The ramifications of unresolved DDIs can be far reaching, especially among critically ill patients. Conversely, some DDIs must be tolerated because of the risk-benefit assessment associated with the treatment in question (e.g., bleeding risk versus in-stent thrombosis risk with concomitant aspirin and clopidogrel in post-stent patients), limiting the significance of the DDI.

Development of a DDI knowledgebase requires careful consideration of the source of the severity information, both to avoid excessive alerts and to create clinically meaningful ones. DDI systems provide evidence behind most of their alerts, but clinicians must be aware that some alerts are based on theoretical interactions, which use known CYP enzyme inhibitors, inducers, and substrates to infer potential DDIs. Many of these DDIs have no clinically relevant case reports to substantiate the hypothetical interaction [26].

Drug-drug interaction knowledgebase development should consider patient-specific information, such as patient demographics, risk factors for the development of DDIs, laboratory values, radiology reports, electrocardiogram information, and hemodynamic values. These systems should also be tailored to the clinician and the patient care environment. Physician and pharmacist alerts should differ so that each provider receives the most clinically relevant information. Additionally, alerts could be tailored to ICU and non-ICU patient care areas. The legal ramifications of these differences must be explored to determine the most appropriate manner of reporting differing information.
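The role- and care-area-specific tailoring suggested above could be sketched as a lookup of severity thresholds keyed by provider role and setting. All thresholds and names below are invented assumptions for illustration; a real CDSS would derive its tiers from clinician consensus and local review:

```python
# Assumed ordinal encoding of the A-D/X scale used throughout this sketch.
SEVERITY_RANK = {"A": 0, "B": 1, "C": 2, "D": 3, "X": 4}

# Hypothetical minimum severity worth alerting, per (role, care area).
# These values are illustrative only, not a recommendation.
ALERT_THRESHOLD = {
    ("pharmacist", "ICU"): "C",      # pharmacists see moderate and above
    ("pharmacist", "non-ICU"): "C",
    ("physician", "ICU"): "D",       # physicians see only major/contraindicated
    ("physician", "non-ICU"): "C",
}

def should_alert(severity: str, role: str, care_area: str) -> bool:
    """Fire an alert only when severity meets the tier for this role/setting."""
    threshold = ALERT_THRESHOLD[(role, care_area)]
    return SEVERITY_RANK[severity] >= SEVERITY_RANK[threshold]
```

The design choice here is that suppression is a policy table separate from the DDI knowledgebase itself, so tiers can be revised without touching the interaction data.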

5. Limitations

The setting of this project was an academic medical center; therefore, the results may not extend to community hospitals. The patient population was limited to patients treated in the CCU, where specific medications are commonly used that may not be used in all ICUs, limiting the validity of this study in other environments. A differing number of alerts and patients was assessed by each evaluator, with some overlap, because of the time each spent on the patient care service; this could have contributed to the differing rankings. Additionally, the evaluations were based on the experience of only four clinicians. The rounding clinical pharmacist (MJA) routinely worked with the patient care team, and clinical judgment may have been affected by previous interactions among team members. Only two drug databases were used in the study, although these are the two most commonly used alert systems at our institution.

6. Conclusion

Knowledgebase development for CDSS should be structured to limit alert fatigue and optimize patient outcomes. This project demonstrates that, when clinicians can draw on patient care knowledge and assess the risk and benefit of drug therapy, they frequently rank DDIs as less severe than proprietary databases do. It may be best to develop a DDI knowledgebase for CDSS with clinician input and to adjust alerting systems for specific patient populations.

Conflict of Interests

The authors report no conflict of interests or direct financial relationships with the commercial entities mentioned in this paper.

References

  1. D. J. Cullen, B. J. Sweitzer, D. W. Bates, E. Burdick, A. Edmondson, and L. L. Leape, “Preventable adverse drug events in hospitalized patients: a comparative study of intensive care and general care units,” Critical Care Medicine, vol. 25, no. 8, pp. 1289–1297, 1997.
  2. S. L. Kane-Gill, J. G. Kowiatek, and R. J. Weber, “A comparison of voluntarily reported medication errors in intensive care and general care units,” Quality and Safety in Health Care, vol. 19, no. 1, pp. 55–59, 2010.
  3. AHRQ Patient Safety Network, Agency for Healthcare Research and Quality, Rockville, Md, USA, 2012, http://psnet.ahrq.gov/popup_glossary.aspx?name=adversedrugevent.
  4. G. K. Dresser and D. G. Bailey, “A basic conceptual and practical overview of interactions with highly prescribed drugs,” Canadian Journal of Clinical Pharmacology, vol. 9, no. 4, pp. 191–198, 2002.
  5. H. Van Der Sijs, J. Aarts, A. Vulto, and M. Berg, “Overriding of drug safety alerts in computerized physician order entry,” Journal of the American Medical Informatics Association, vol. 13, no. 2, pp. 138–147, 2006.
  6. P. A. Glassman, B. Simon, P. Belperio, and A. Lanto, “Improving recognition of drug interactions: benefits and barriers to using automated drug alerts,” Medical Care, vol. 40, no. 12, pp. 1161–1171, 2002.
  7. P. L. Smithburger, S. L. Kane-Gill, N. J. Benedict, B. A. Falcione, and A. L. Seybert, “Grading the severity of drug-drug interactions in the intensive care unit: a comparison between clinician assessment and proprietary database severity rankings,” Annals of Pharmacotherapy, vol. 44, no. 11, pp. 1718–1724, 2010.
  8. J. J. Cash, “Alert fatigue,” American Journal of Health-System Pharmacy, vol. 66, no. 23, pp. 2098–2101, 2009.
  9. J. S. Ash, D. F. Sittig, E. M. Campbell, K. P. Guappone, and R. H. Dykstra, “Some unintended consequences of clinical decision support systems,” AMIA Annual Symposium Proceedings, pp. 26–30, 2007.
  10. N. R. Shah, A. C. Seger, D. L. Seger et al., “Improving acceptance of computerized prescribing alerts in ambulatory care,” Journal of the American Medical Informatics Association, vol. 13, no. 1, pp. 5–11, 2006.
  11. T. H. Payne, W. P. Nichol, P. Hoey, and J. Savarino, “Characteristics and override rates of order checks in a practitioner order entry system,” AMIA Annual Symposium Proceedings, pp. 602–606, 2002.
  12. S. N. Weingart, M. Toth, D. Z. Sands, M. D. Aronson, R. B. Davis, and R. S. Phillips, “Physicians' decisions to override computerized drug alerts in primary care,” Archives of Internal Medicine, vol. 163, no. 21, pp. 2625–2631, 2003.
  13. J. R. Spina, P. A. Glassman, P. Belperio, R. Cader, and S. Asch, “Clinical relevance of automated drug alerts from the perspective of medical providers,” American Journal of Medical Quality, vol. 20, no. 1, pp. 7–14, 2005.
  14. “DRUG-REAX System (electronic version),” Thomson Reuters, Greenwood Village, Colo, USA, 2010, http://www.thomsonhc.com.
  15. Lexi-Comp (Lexi-Interact) [computer program], 2010.
  16. G. J. Kuperman, A. Bobb, T. H. Payne et al., “Medication-related clinical decision support in computerized provider order entry systems: a review,” Journal of the American Medical Informatics Association, vol. 14, no. 1, pp. 29–40, 2007.
  17. T. L. Humphries, N. Carroll, E. A. Chester, D. Magid, and B. Rocho, “Evaluation of an electronic critical drug interaction program coupled with active pharmacist intervention,” Annals of Pharmacotherapy, vol. 41, no. 12, pp. 1979–1985, 2007.
  18. P. L. Smithburger, M. S. Buckley, S. Bejian, K. Burenheide et al., “A critical evaluation of clinical decision support for the detection of drug-drug interactions,” Expert Opinion on Drug Safety, vol. 10, no. 6, pp. 871–872, 2011.
  19. M. D. Paterno, S. M. Maviglia, P. N. Gorman et al., “Tiering drug-drug interaction alerts by severity increases compliance rates,” Journal of the American Medical Informatics Association, vol. 16, no. 1, pp. 40–46, 2009.
  20. Y. Arimone, B. Bégaud, G. Miremont-Salamé et al., “Agreement of expert judgment in causality assessment of adverse drug reactions,” European Journal of Clinical Pharmacology, vol. 61, pp. 169–173, 2005.
  21. J. Abarca, D. C. Malone, E. P. Armstrong et al., “Concordance of severity ratings provided in four drug interaction compendia,” Journal of the American Pharmacists Association, vol. 44, no. 2, pp. 136–141, 2004.
  22. L. Juntti-Patinen and P. J. Neuvonen, “Drug-related deaths in a university central hospital,” European Journal of Clinical Pharmacology, vol. 58, no. 7, pp. 479–482, 2002.
  23. G. I. Kohler, S. M. Bode-Boger, R. Busse, M. Hoopmann, T. Welte, and R. H. Boger, “Drug-drug interactions in medical patients: effects of in-hospital treatment and relation to multiple drug use,” International Journal of Clinical Pharmacology and Therapeutics, vol. 38, no. 11, pp. 504–513, 2000.
  24. C. Moura, N. Prado, and F. Acurcio, “Potential drug-drug interactions associated with prolonged stays in the intensive care unit: a retrospective cohort study,” Clinical Drug Investigation, vol. 31, no. 5, pp. 309–316, 2011.
  25. A. M. M. Reis and S. H. D. B. Cassiani, “Adverse drug events in an intensive care unit of a university hospital,” European Journal of Clinical Pharmacology, vol. 67, no. 6, pp. 625–632, 2011.
  26. L. Magro, U. Moretti, and R. Leone, “Epidemiology and characteristics of adverse drug reactions caused by drug-drug interactions,” Expert Opinion on Drug Safety, vol. 11, no. 1, pp. 83–94, 2012.