Ibrutinib proves active in high-risk CLL
Credit: Mary Ann Thompson
Single-agent ibrutinib can elicit a high response rate in patients with high-risk chronic lymphocytic leukemia (CLL), results of a phase 2 trial suggest.
The Bruton’s tyrosine kinase inhibitor prompted a 92% objective response rate in patients who had previously untreated or relapsed/refractory CLL with either 17p deletion (del 17p) or tumor protein p53 (TP53) aberrations.
Researchers reported this and other results of the trial in The Lancet Oncology.
“Ibrutinib treatment results observed in CLL patients with del 17p or TP53 aberrations are very encouraging given that these patients have a high relapse rate after chemotherapy and are in need of tolerable, effective, and durable treatment options,” said study author Mohammed Farooqui, DO, of the National Heart, Lung, and Blood Institute in Bethesda, Maryland.
He and his colleagues studied 51 patients in this trial, 35 with previously untreated CLL and 16 with relapsed or refractory CLL. Forty-seven of the patients (92%) had del 17p, and 4 patients carried the TP53 aberration but did not have del 17p.
The study’s primary endpoint was overall response rate after 24 weeks. Secondary endpoints included safety, overall survival, progression-free survival, best response, and nodal response.
The median follow-up for all patients was 24 months (15 months for the previously untreated cohort). At 24 weeks, 48 patients were evaluable for response, assessed according to the modified IWCLL 2008 criteria.
Response rates
At 24 weeks, 92% (n=44) of the 48 evaluable patients achieved an objective response. Fifty percent of all evaluable patients achieved a partial response (n=24)—55% of previously untreated patients (n=18) and 40% of relapsed/refractory patients (n=6).
As for best response, 10% of all patients achieved a complete response (n=5)—12% of previously untreated patients (n=4) and 7% of relapsed/refractory patients (n=1). And 67% of patients had a partial response (n=32)—70% of previously untreated patients (n=23) and 60% of relapsed/refractory patients (n=9).
After 8 weeks on therapy, a greater than 50% reduction in tumor burden was observed in the bone marrow of 44% of patients, the lymph nodes of 70%, and the spleen of 79%. After 24 weeks of therapy, these rates increased to 83%, 93%, and 95%, respectively.
Survival and safety
The estimated progression-free survival at 24 months for all patients on an intention-to-treat basis was 82%. Forty-two of the 51 patients (82%) continued on ibrutinib treatment without disease progression.
The estimated overall survival at 24 months was 80% for all patients—84% for previously untreated patients and 74% for patients with relapsed or refractory disease.
At the final follow-up, 8 patients (16%) had died—5 (10%) from progressive disease, 2 (4%) from infection, and 1 (2%) from a sudden, unexplained death that may have been treatment-related.
The most common adverse events (occurring in more than 30% of all patients) potentially related to ibrutinib were arthralgia (59%), diarrhea (51%), rash (47%), nail ridging (43%), bruising (33%), and muscle spasms (31%).
The most frequent grade 3 or 4 hematologic adverse events were neutropenia (24%), anemia (14%), and thrombocytopenia (10%). The most common nonhematologic grade 3 adverse event was pneumonia, which occurred in 3 patients (6%).
Nine patients (18%) discontinued treatment. The reasons for discontinuation included disease progression in 5 patients (10%) and death for 3 patients (6%).
This research was sponsored by the Intramural Research Program of the National Heart, Lung, and Blood Institute and the National Cancer Institute; Danish Cancer Society; Novo Nordisk Foundation; National Institutes of Health Medical Research Scholars Program; and Pharmacyclics Inc.
Ibrutinib is jointly developed and commercialized by Pharmacyclics and Janssen Biotech, Inc.
Whole plant treats malaria better
Artemisia annua, from which artemisinin is derived. Credit: Jorge Ferreira
Preclinical research suggests that using the whole plant Artemisia annua, from which the drug artemisinin is extracted, may treat malaria more effectively than artemisinin itself.
Whole-plant treatment withstood the evolution of resistance and remained effective for up to 3 times longer than pure artemisinin.
Whole-plant therapy was also more effective in killing rodent parasites that have previously evolved resistance to pure artemisinin.
Stephen Rich, PhD, of the University of Massachusetts Amherst, and his colleagues reported these findings in PNAS.
The team previously showed that the whole-plant approach is more effective at killing rodent malaria than purified artemisinin.
In the present study, the investigators conducted a series of experiments to determine how quickly parasites become resistant to whole-plant treatment compared with pure artemisinin, and whether whole-plant treatment can overcome resistance to pharmaceutical artemisinin.
The team chose 2 rodent malaria species for particular characteristics. They chose Plasmodium yoelii because an artemisinin-resistant strain exists and could be used to test whether the whole plant can overcome that resistance.
And they chose Plasmodium chabaudi because, among several species of rodent malaria, it most closely biologically resembles the deadliest of the 5 human malaria parasites, Plasmodium falciparum.
“Conducting these experiments in different rodent malaria species also provides a robust test of the therapy,” Dr Rich noted.
To determine the respective evolutionary rates of resistance to whole-plant therapy and artemisinin, Dr Rich and his colleagues conducted artificial evolution experiments. The goal was to compare the rates at which resistance to these two treatments arises in serial passage among wild-type parasite lines.
In this technique, parasite proliferation rates determine resistance. Resistant parasites are expected to reach a certain target level at the same time, whether treatment is present or absent. Sensitive parasite strains will grow more slowly in the presence of treatment and reach the target later than untreated strains.
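As a rough illustration of this readout (not code from the paper), the sketch below classifies a parasite line by comparing its time to a target parasitemia level with and without treatment; the target value and the delay threshold are assumptions chosen here for illustration only, not study parameters.

```python
# Minimal sketch, assuming daily parasitemia measurements for a treated and an
# untreated line. A line is scored "resistant" if treatment barely delays it
# from reaching the target level; the 1.2 delay ratio is an illustrative assumption.

def days_to_target(daily_parasitemia, target):
    """Return the first day (1-indexed) parasitemia reaches the target, or None if it never does."""
    for day, level in enumerate(daily_parasitemia, start=1):
        if level >= target:
            return day
    return None

def classify_line(treated_curve, untreated_curve, target, delay_ratio=1.2):
    t_treated = days_to_target(treated_curve, target)
    t_untreated = days_to_target(untreated_curve, target)
    if t_treated is None or t_untreated is None:
        return "sensitive"  # never reached the target under treatment
    return "resistant" if t_treated <= delay_ratio * t_untreated else "sensitive"

# Example: a treated line reaching 5% parasitemia on the same day as the untreated line is scored resistant.
print(classify_line([0.5, 2.0, 5.1, 8.0], [0.8, 2.5, 5.2, 9.0], target=5.0))  # resistant
```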
The investigators found that artemisinin-treated parasites achieved stable resistance to low-dose (100 mg/kg) therapy on passage 16. Those parasites were then treated with a doubled artemisinin dose, and they became resistant to this after an additional 24 passages.
By comparison, parasites did not become resistant to even the low dose of whole-plant therapy (100 mg/kg) after 49 passages.
From this, the investigators concluded that the whole-plant therapy lasts at least 3 times longer than its artemisinin counterpart, and at least twice as long as the doubled dose of pure artemisinin.
“This is especially important given the recent reports of resistance to artemisinin in malaria-endemic regions of the world,” Dr Rich said.
He and his colleagues also tested whether dried, whole-plant therapy can overcome existing resistance to pharmaceutical artemisinin.
They fed groups of mice infected with artemisinin-resistant malaria either the whole-plant therapy or artemisinin mixed with water. Single treatments were given in low (40 mg) and high (200 mg) doses. Control groups received a mouse chow placebo.
The investigators then measured the parasite levels in the rodents’ bloodstream at 9 points after treatment began.
Mice given either the low or high dose of whole-plant therapy showed a significantly greater reduction in parasitemia than those in their respective artemisinin groups. As expected for these resistant parasites, parasitemia in mice in the low-dose artemisinin group did not differ from controls.
The investigators said consuming the whole plant may be more effective than the single purified drug because the whole plant “may constitute a naturally occurring combination therapy that augments artemisinin delivery and synergizes the drug’s activity.”
Dr Rich did note that the exact mechanisms of whole-plant therapy’s effectiveness still need to be identified. But he also said the antimalarial activity of whole-plant therapy against artemisinin-resistant parasites provides “compelling reasons to further explore the role of non-pharmaceutical forms of artemisinin to treat human malaria.”
Medication Warnings for Adults
Many computerized provider order entry (CPOE) systems suffer from having too much of a good thing. Few would question the beneficial effect of CPOE on medication order clarity, completeness, and transmission.[1, 2] When mechanisms for basic decision support have been added, however, such as allergy, interaction, and duplicate warnings, reductions in medication errors and adverse events have not been consistently achieved.[3, 4, 5, 6, 7] This is likely due in part to the fact that ordering providers override medication warnings at staggeringly high rates.[8, 9] Clinicians acknowledge that they are ignoring potentially valuable warnings,[10, 11] but they suffer from alert fatigue due to the sheer number of messages, many of which they judge to be of low value.[11, 12]
Redesign of medication alert systems to increase their signal‐to‐noise ratio is badly needed,[13, 14, 15, 16] and will need to consider the clinical significance of alerts, their presentation, and context‐specific factors that potentially contribute to warning effectiveness.[17, 18, 19] Relatively few studies, however, have objectively looked at context factors such as the characteristics of providers, patients, medications, and warnings that are associated with provider responses to warnings,[9, 20, 21, 22, 23, 24, 25] and only 2 have studied how warning acceptance is associated with medication risk.[18, 26] We wished to explore these factors further. Warning acceptance has been shown to be higher, at least in the outpatient setting, when orders are entered by low‐volume prescribers for infrequently encountered warnings,[24] and there is some evidence that patients receive higher‐quality care during the day.[27] Significant attention has been placed in recent years on inappropriate prescribing in older patients,[28] and on creating a culture of safety in healthcare.[29] We therefore hypothesized that our providers would be more cautious, and medication warning acceptance rates would be higher, when orders were entered for patients who were older or with more complex medical problems, when they were entered during the day by caregivers who entered few orders, when the medications ordered were potentially associated with greater risk, and when the warnings themselves were infrequently encountered.
METHODS
Setting and Caregivers
Johns Hopkins Bayview Medical Center (JHBMC) is a 400‐bed academic medical center serving southeastern Baltimore, Maryland. Prescribing caregivers include residents and fellows who rotate to both JHBMC and Johns Hopkins Hospital, internal medicine hospitalists, other attending physicians (including teaching attendings for all departments, and hospitalists and clinical associates for departments other than internal medicine), and nurse practitioners and physician assistants from most JHBMC departments. Nearly 100% of patients on the surgery, obstetrics/gynecology, neurology, psychiatry, and chemical dependence services are hospitalized on units dedicated to their respective specialty, and the same is true for approximately 95% of medicine patients.
Order Entry
JHBMC began using a client-server order entry system by MEDITECH (Westwood, MA) in July 2003. Provider order entry was phased in beginning in October 2003 and completed by the end of 2004. MEDITECH version 5.64 was in use during the study period. Each time a medication is ordered during a patient ordering session, it may generate duplicate, interaction, allergy, adverse reaction, and dose warnings. Duplicate warnings are generated when a medication is ordered (regardless of route) that is already on the patient's active medication list, was on that list in the preceding 24 hours, or is being ordered simultaneously. A drug-interaction database licensed from First DataBank (South San Francisco, CA), updated monthly, classifies potential drug-drug interactions as contraindicated, severe, intermediate, or mild. Interactions classified as contraindicated by First DataBank are included in the severe category in MEDITECH 5.64. During the study period, JHBMC's version of MEDITECH was configured so that providers were warned of potential severe and intermediate drug-drug interactions, but not mild ones. No other customizations had been made. Patients' histories of allergies and other adverse responses to medications can be entered by any credentialed staff member. They are maintained together in an allergies section of the electronic medical record but are identified as either allergies or adverse reactions at the time they are entered, and each generates its own warnings.
When more than 1 duplicate, interaction, allergy, or adverse reaction warning is generated for a particular medication, all are listed on a single screen in identical fonts. No visual distinction is made between severe and intermediate drug-drug interactions; for these, the category of medication ordered is followed by the category of the medication with which there is a potential interaction. A details button can be selected to learn which specific medications are involved and the severity and nature of the potential interactions identified. In response to the warnings, providers can override them, erase the order, or replace the order by clicking 1 of 3 buttons at the bottom of the screen. Warnings are not repeated unless the medication is reordered for that patient. Dose warnings appear on a subsequent screen and are not addressed in this article.
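As a rough sketch of the ordering-session logic described above, the following Python fragment mirrors the duplicate-warning rule and the 3 response options. It is an illustration only, not MEDITECH code; the function name and data structures are assumptions.

```python
# Hedged sketch: a duplicate warning fires when the ordered drug (by base name, any route)
# is on the patient's active list, was on it within the preceding 24 hours, or is being
# ordered in the same session.
from datetime import datetime, timedelta

def duplicate_warning(drug_name, order_time, active_meds, recently_stopped, session_orders):
    """active_meds: set of base drug names; recently_stopped: {drug name: stop time};
    session_orders: base names already ordered in this session."""
    if drug_name in active_meds or drug_name in session_orders:
        return True
    stop_time = recently_stopped.get(drug_name)
    return stop_time is not None and order_time - stop_time <= timedelta(hours=24)

# The 3 responses available to the provider, as described in the text.
RESPONSES = ("override warning", "erase order", "replace order")

# Example: morphine stopped 6 hours ago still triggers a duplicate warning.
now = datetime(2010, 1, 15, 10, 0)
print(duplicate_warning("morphine", now, active_meds=set(),
                        recently_stopped={"morphine": now - timedelta(hours=6)},
                        session_orders=[]))  # True
```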
Nurses are discouraged from entering verbal orders but do have the capacity to do so, at which time they encounter and must respond to the standard medication warnings, if any. Medical students are able to enter orders, at which time they also encounter and must respond to the standard medication warnings; their orders must then be cosigned by a licensed provider before they can be processed. Warnings encountered by nurses and medical students are not repeated at the time of cosignature by a licensed provider.
Data Collection
We collected data regarding all medication orders placed in our CPOE system from October 1, 2009 to April 20, 2010 for all adult patients. Intensive care unit (ICU) patients were excluded, in anticipation of a separate analysis. Hospitalizations under observation were also excluded. We then ran a report showing all medications that generated any number of warnings of any type (duplicate, interaction, allergy, or adverse reaction) for the same population. Warnings generated during readmissions that occurred at any point during the study period (ranging from 1 to 21 times) were excluded, because these patients likely had many, if not all, of the same medications ordered during their readmissions as during their initial hospitalization, which would unduly influence the analysis if retained.
There was wide variation in the number of warnings generated per medication and in the number of each warning type per medication that generated multiple warnings. Therefore, for ease of analysis and to ensure that we could accurately determine the response to each individual warning type, we thereafter focused on the medications that generated single warnings during the study period. For each single warning we obtained patient name, account number, event date and time, hospital unit at the time of the event, ordered medication, ordering staff member, warning type, and staff member response to the warning (eg, override warning or erase order [accept the warning]). The response replace was used very infrequently, and therefore warnings that resulted in this response were excluded. Medications available in more than 1 form included the route of administration in their name, and from this they were categorized as parenteral or nonparenteral. All nonparenteral or parenteral forms of a given medication were grouped together as 1 medication (eg, morphine sustained release and morphine elixir were classified as a single medication, nonparenteral morphine). Medications were further categorized according to whether or not they were on the Institute for Safe Medication Practice (ISMP) List of High-Alert Medications.[30]
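A minimal sketch of the medication categorization just described is shown below, under stated assumptions: the field names, the route keyword, and the partial ISMP subset are illustrative, not the study's actual data dictionary or the full published list.

```python
# Illustrative categorization of a single-warning record.
# Partial, assumed subset of the ISMP high-alert list; the study used the published list.[30]
ISMP_HIGH_ALERT = {"hydromorphone", "oxycodone", "warfarin", "heparin", "insulin aspart"}

def categorize(order):
    """order: dict with a 'medication' string such as 'morphine elixir' or 'hydromorphone injectable'."""
    name = order["medication"].lower()
    # Route of administration is embedded in the medication name, per the system's convention.
    route = "parenteral" if "injectable" in name else "nonparenteral"
    # Collapse all forms of a drug into a single medication keyed by its base name.
    base = (name.replace("injectable", "")
                .replace("elixir", "")
                .replace("sustained release", "")
                .strip())
    return {"medication": base, "route": route, "ismp_high_alert": base in ISMP_HIGH_ALERT}

print(categorize({"medication": "Hydromorphone injectable"}))
# {'medication': 'hydromorphone', 'route': 'parenteral', 'ismp_high_alert': True}
```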
The study was approved by the Johns Hopkins Institutional Review Board.
Analysis
We collected descriptive data about patients and providers. Age and length of stay (LOS) at the time of the event were determined from the patients' admission date and date of birth and grouped into quartiles. Hospital units were grouped according to which service or services they primarily served. Medications were grouped into quartiles according to the total number of warnings they generated during the study period. Warnings were dichotomously categorized according to whether they were overridden or accepted. Unpaired t tests were used to compare continuous variables between the 2 groups, and χ2 tests were used to compare categorical variables. A multivariable logistic regression was then performed, using variables with a P value of <0.10 in the univariate analysis, to control for confounders and identify independent predictors of medication warning acceptance. All analyses were performed using Intercooled Stata 12 (StataCorp, College Station, TX).
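The analysis itself was performed in Stata; the Python/statsmodels sketch below only illustrates the described sequence (quartile grouping, univariate screening at P < 0.10, then multivariable logistic regression). The file name and column names are assumptions, not the study dataset.

```python
# Hedged sketch of the analysis plan, assuming one row per single warning with an
# 'accepted' indicator (1 = order erased, 0 = warning overridden).
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

df = pd.read_csv("single_warnings.csv")  # hypothetical extract
df["age_quartile"] = pd.qcut(df["patient_age"], 4).astype(str)
df["los_quartile"] = pd.qcut(df["length_of_stay"], 4, duplicates="drop").astype(str)

# Univariate screen: chi-square test of each categorical factor against warning acceptance.
candidates = []
for col in ["age_quartile", "los_quartile", "gender", "hospital_unit", "caregiver_type",
            "route", "ismp_high_alert", "warning_type"]:
    _, p, _, _ = chi2_contingency(pd.crosstab(df[col], df["accepted"]))
    if p < 0.10:
        candidates.append(col)

# Multivariable logistic regression restricted to factors that passed the screen.
formula = "accepted ~ " + " + ".join(f"C({c})" for c in candidates)
model = smf.logit(formula, data=df).fit()
print(model.summary())
```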
RESULTS
A total of 259,656 medication orders were placed for adult non-ICU patients during the 7-month study period. Of those orders, 45,835 generated 1 or more medication warnings. The median number of warnings per patient was 4 (interquartile range [IQR]=2-8; mean=5.9, standard deviation [SD]=6.2), with a range from 1 to 84. The median number of warnings generated per provider during the study period was 36 (IQR=6-106; mean=87.4, SD=133.7), with a range of 1 to 1,096.
There were 40,391 orders placed for 454 medications for adult non-ICU patients that generated a single medication warning (excluding those with the response replace, which was used 20 times) during the 7-month study period. Data regarding the patients and providers associated with the orders generating single warnings are shown in Table 1. Most patients were on medicine units, and most orders were entered by residents. Patients' LOS at the time the orders were placed ranged from 0 to 118 days (median=1, IQR=0-4; mean=4.0, SD=7.2). The median number of single warnings per patient was 4 (IQR=2-8; mean=6.1, SD=6.5), with a range from 1 to 84. The median number of single warnings generated per provider during the study period was 15 (IQR=3-73; mean=61.7, SD=109.6), with a range of 1 to 1,057.
Table 1. Patients and Caregivers Associated With Orders Generating Single Warnings

| | No. (%) |
|---|---|
| Patients (N=6,646) | |
| Age | |
| 15-45 years | 2,048 (31%) |
| 46-57 years | 1,610 (24%) |
| 58-72 years | 1,520 (23%) |
| 73-104 years | 1,468 (22%) |
| Gender | |
| Male | 2,934 (44%) |
| Hospital unita | |
| Medicine | 2,992 (45%) |
| Surgery | 1,836 (28%) |
| Neuro/psych/chem dep | 1,337 (20%) |
| OB/GYN | 481 (7%) |
| Caregivers (N=655) | |
| Resident | 248 (38%)b |
| Nurse | 154 (24%) |
| Attending or other | 97 (15%) |
| NP/PA | 69 (11%) |
| IM hospitalist | 31 (5%) |
| Fellow | 27 (4%) |
| Medical student | 23 (4%) |
| Pharmacist | 6 (1%) |
Patient and caregiver characteristics for the medication orders that generated single warnings are shown in Table 2. The majority of medications were nonparenteral and not on the ISMP list (Table 3). Most warnings generated were either duplicate (47%) or interaction warnings (47%). Warnings of a particular type were repeated 14.5% of the time for a particular medication and patient (from 2 to 24 times, median=2, IQR=2-2, mean=2.7, SD=1.4), and 9.8% of the time for a particular caregiver, medication, and patient (from 2 to 18 times, median=2, IQR=2-2, mean=2.4, SD=1.1).
Table 2. Association of Patient, Caregiver, Medication, and Warning Characteristics With Warning Acceptance (Univariate Analysis)

| Variable | No. of Warnings (%)a | No. of Warnings Accepted (%)a | P |
|---|---|---|---|
| Patient age | | | |
| 15-45 years | 10,881 (27) | 602 (5.5%) | <0.001 |
| 46-57 years | 9,733 (24) | 382 (3.9%) | |
| 58-72 years | 10,000 (25) | 308 (3.1%) | |
| 73-104 years | 9,777 (24) | 262 (2.7%) | |
| Patient gender | | | |
| Female | 23,395 (58) | 866 (3.7%) | 0.074 |
| Male | 16,996 (42) | 688 (4.1%) | |
| Patient length of stay | | | |
| <1 day | 10,721 (27) | 660 (6.2%) | <0.001 |
| 1 day | 10,854 (27) | 385 (3.5%) | |
| 2-4 days | 10,424 (26) | 277 (2.7%) | |
| 5-118 days | 8,392 (21) | 232 (2.8%) | |
| Patient hospital unit | | | |
| Medicine | 20,057 (50) | 519 (2.6%) | <0.001 |
| Surgery | 10,274 (25) | 477 (4.6%) | |
| Neuro/psych/chem dep | 8,279 (21) | 417 (5.0%) | |
| OB/GYN | 1,781 (4) | 141 (7.9%) | |
| Ordering caregiver | | | |
| Resident | 22,523 (56) | 700 (3.1%) | <0.001 |
| NP/PA | 7,534 (19) | 369 (4.9%) | |
| IM hospitalist | 5,048 (13) | 155 (3.1%) | |
| Attending | 3,225 (8) | 219 (6.8%) | |
| Fellow | 910 (2) | 34 (3.7%) | |
| Nurse | 865 (2) | 58 (6.7%) | |
| Medical student | 265 (<1) | 17 (6.4%) | |
| Pharmacist | 21 (<1) | 2 (9.5%) | |
| Day ordered | | | |
| Weekday | 31,499 (78%) | 1,276 (4.1%) | <0.001 |
| Weekend | 8,892 (22%) | 278 (3.1%) | |
| Time ordered | | | |
| 0000-0559 | 4,231 (11%) | 117 (2.8%) | <0.001 |
| 0600-1159 | 11,696 (29%) | 348 (3.0%) | |
| 1200-1759 | 15,879 (39%) | 722 (4.6%) | |
| 1800-2359 | 8,585 (21%) | 367 (4.3%) | |
| Administration route (no. of meds) | | | |
| Nonparenteral (339) | 27,086 (67%) | 956 (3.5%) | <0.001 |
| Parenteral (115) | 13,305 (33%) | 598 (4.5%) | |
| ISMP List of High-Alert Medications status (no. of meds)[30] | | | |
| Not on ISMP list (394) | 27,503 (68%) | 1,251 (4.5%) | <0.001 |
| On ISMP list (60) | 12,888 (32%) | 303 (2.4%) | |
| No. of warnings per med (no. of meds) | | | |
| 1,106-2,133 (7) | 9,869 (24%) | 191 (1.9%) | <0.001 |
| 468-1,034 (13) | 10,014 (25%) | 331 (3.3%) | |
| 170-444 (40) | 10,182 (25%) | 314 (3.1%) | |
| 1-169 (394) | 10,326 (26%) | 718 (7.0%) | |
| Warning type (no. of meds)b | | | |
| Duplicate (369) | 19,083 (47%) | 1,041 (5.5%) | <0.001 |
| Interaction (315) | 18,894 (47%) | 254 (1.3%) | |
| Allergy (138) | 2,371 (6%) | 243 (10.0%) | |
| Adverse reaction (14) | 43 (0.1%) | 16 (37%) | |
Table 3. Independent Predictors of Warning Acceptance (Multivariable Logistic Regression)

| Variable | Adjusted OR | 95% CI |
|---|---|---|
| Patient age | | |
| 15-45 years | 1.00 | Reference |
| 46-57 years | 0.89 | 0.77-1.02 |
| 58-72 years | 0.85 | 0.73-0.99 |
| 73-104 years | 0.91 | 0.77-1.08 |
| Patient gender | | |
| Female | 1.00 | Reference |
| Male | 1.26 | 1.13-1.41 |
| Patient length of stay | | |
| <1 day | 1.00 | Reference |
| 1 day | 0.65 | 0.55-0.76 |
| 2-4 days | 0.49 | 0.42-0.58 |
| 5-118 days | 0.49 | 0.41-0.58 |
| Patient hospital unit | | |
| Medicine | 1.00 | Reference |
| Surgery | 1.45 | 1.25-1.68 |
| Neuro/psych/chem dep | 1.35 | 1.15-1.58 |
| OB/GYN | 2.43 | 1.92-3.08 |
| Ordering caregiver | | |
| Resident | 1.00 | Reference |
| NP/PA | 1.63 | 1.42-1.88 |
| IM hospitalist | 1.24 | 1.02-1.50 |
| Attending | 1.83 | 1.54-2.18 |
| Fellow | 1.41 | 0.98-2.03 |
| Nurse | 1.92 | 1.44-2.57 |
| Medical student | 1.17 | 0.70-1.95 |
| Pharmacist | 3.08 | 0.67-14.03 |
| Medication factors | | |
| Nonparenteral | 1.00 | Reference |
| Parenteral | 1.79 | 1.59-2.03 |
| High-Alert Medication status (no. of meds)[30] | | |
| Not on ISMP list | 1.00 | Reference |
| On ISMP list | 0.37 | 0.32-0.43 |
| No. of warnings per medication | | |
| 1,106-2,133 | 1.00 | Reference |
| 468-1,034 | 2.30 | 1.90-2.79 |
| 170-444 | 2.25 | 1.85-2.73 |
| 1-169 | 4.10 | 3.42-4.92 |
| Warning type | | |
| Duplicate | 1.00 | Reference |
| Interaction | 0.24 | 0.21-0.28 |
| Allergy | 2.28 | 1.94-2.68 |
| Adverse reaction | 9.24 | 4.52-18.90 |
One thousand five hundred fifty-four warnings (4%) were erased (ie, accepted by clinicians). In univariate analysis, only patient gender was not associated with warning acceptance. Patient age, LOS, hospital unit at the time of order entry, ordering caregiver type, day and time the medication was ordered, administration route, presence on the ISMP list, warning frequency, and warning type were all significantly associated with warning acceptance (Table 2).
Older patient age, longer LOS, presence of the medication on the ISMP list, and interaction warning type were all negatively associated with warning acceptance in multivariable analysis. Warning acceptance was positively associated with male patient gender, being on a service other than medicine, being a caregiver other than a resident, parenteral medications, lower warning frequency, and allergy or adverse reaction warning types (Table 3).
The 20 medications that generated the most single warnings are shown in Table 4. Medications on the ISMP list accounted for 8 of these top 20 medications. For most of them, duplicate and interaction warnings accounted for most of the warnings generated, except for parenteral hydromorphone, oral oxycodone, parenteral morphine, and oral hydromorphone, which each had more allergy than interaction warnings.
Table 4. The 20 Medications Generating the Most Single Warnings

| Medication | ISMP Listb | No. of Warnings | Duplicate, No. (%)c | Interaction, No. (%)c | Allergy, No. (%)c | Adverse Reaction, No. (%)c |
|---|---|---|---|---|---|---|
| Hydromorphone injectable | Yes | 2,133 | 1,584 (74.3) | 127 (6.0) | 422 (19.8) | |
| Metoprolol | | 1,432 | 550 (38.4) | 870 (60.8) | 12 (0.8) | |
| Aspirin | | 1,375 | 212 (15.4) | 1,096 (79.7) | 67 (4.9) | |
| Oxycodone | Yes | 1,360 | 987 (72.6) | | 364 (26.8) | 9 (0.7) |
| Potassium chloride | | 1,296 | 379 (29.2) | 917 (70.8) | | |
| Ondansetron injectable | | 1,167 | 1,013 (86.8) | 153 (13.1) | 1 (0.1) | |
| Aspart insulin injectable | Yes | 1,106 | 643 (58.1) | 463 (41.9) | | |
| Warfarin | Yes | 1,034 | 298 (28.8) | 736 (71.2) | | |
| Heparin injectable | Yes | 1,030 | 205 (19.9) | 816 (79.2) | 9 (0.3) | |
| Furosemide injectable | | 980 | 438 (45.0) | 542 (55.3) | | |
| Lisinopril | | 926 | 225 (24.3) | 698 (75.4) | 3 (0.3) | |
| Acetaminophen | | 860 | 686 (79.8) | 118 (13.7) | 54 (6.3) | 2 (0.2) |
| Morphine injectable | Yes | 804 | 467 (58.1) | 100 (12.4) | 233 (29.0) | 4 (0.5) |
| Diazepam | | 786 | 731 (93.0) | 41 (5.2) | 14 (1.8) | |
| Glargine insulin injectable | Yes | 746 | 268 (35.9) | 478 (64.1) | | |
| Ibuprofen | | 713 | 125 (17.5) | 529 (74.2) | 54 (7.6) | 5 (0.7) |
| Hydromorphone | Yes | 594 | 372 (62.6) | 31 (5.2) | 187 (31.5) | 4 (0.7) |
| Furosemide | | 586 | 273 (46.6) | 312 (53.2) | 1 (0.2) | |
| Ketorolac injectable | | 487 | 39 (8.0) | 423 (86.9) | 23 (4.7) | 2 (0.4) |
| Prednisone | | 468 | 166 (35.5) | 297 (63.5) | 5 (1.1) | |
DISCUSSION
Medication warnings in our study were frequently overridden, particularly when encountered by residents, for patients with a long LOS and on the internal medicine service, and for medications generating the most warnings and on the ISMP list. Disturbingly, this means that potentially important warnings for medications with the highest potential for causing harm, for possibly the sickest and most complex patients, were those that were most often ignored by young physicians in training who should have had the most to gain from them. Of course, this is not entirely surprising. Despite our hope that a culture of safety would influence young physicians' actions when caring for these patients and prescribing these medications, these patients and medications are those for whom the most warnings are generated, and these physicians are the ones entering the most orders. Only 13% of the medications studied were on the ISMP list, but they generated 32% of the warnings. We controlled for number of warnings and ISMP list status, but not for warning validity. Most likely, high‐risk medications have been set up with more warnings, many of them of lower quality, in an errant but well‐intentioned effort to make them safer. If developers of CPOE systems want to gain serious traction in using decision support to promote prescribing safe medications, they must take substantial action to increase attention to important warnings and decrease the number of clinically insignificant, low‐value warnings encountered by active caregivers on a daily basis.
Only 2 prior studies, both by Seidling et al., have specifically looked at provider response to warnings for high-risk medications. Interaction warnings were rarely accepted in 1 of them,[18] as in our study; however, in contrast to our findings, warning acceptance in both studies was higher for drugs with dose-dependent toxicity.[18, 26] The effect of physician experience on warning acceptance has been addressed in 2 prior studies. Weingart et al. found that residents were more likely than staff physicians to erase medication orders when presented with allergy and interaction warnings in a primary care setting.[20] Long et al. found that physicians younger than 40 years were less likely than older physicians to accept duplicate warnings, but those who had been at the study hospital for a longer period of time were more likely to accept them.[23] The influence of patient LOS and service on warning acceptance has not previously been described. Further study of each of these factors is needed.
Individual hospitals tend to avoid making modifications to order entry warning systems, because monitoring and maintaining these changes is labor intensive. Some institutions may make the decision to turn off certain categories of alerts, such as intermediate interaction warnings, to minimize the noise their providers encounter. There are even tools for disabling individual alerts or groups of alerts, such as that available for purchase from our interaction database vendor.[31] However, institutions may fear litigation should an adverse event be attributed to a disabled warning.[15, 16] Clearly, a comprehensive, health system‐wide approach is warranted.[13, 15] To date, published efforts describing ways to improve the effectiveness of medication warning systems have focused on either heightening the clinical significance of alerts[14, 21, 22, 32, 33, 34, 35, 36] or altering their presentation and how providers experience them.[21, 36, 37, 38, 39, 40, 41, 42, 43] The single medication warnings our providers receive are all presented in an identical font, and presumably response to each would be different if they were better distinguished from each other. We also found that a small but significant number of warnings were repeated for a given patient and even a given provider. If the providers knew they would only be presented with warnings the first time they occurred for a given patient and medication, they might be more attuned to the remaining warnings. Previous studies describe context‐specific decision support for medication ordering[44, 45, 46]; however, only 1 has described the use of patient context factors to modify when or how warnings are presented to providers.[47] None have described tailoring allergy, duplicate, and interaction warnings according to medication or provider types. If further study confirms our findings, modulating basic warning systems according to severity of illness, provider experience, and medication risk could powerfully increase their effectiveness. Of course, this would be extremely challenging to achieve, and is likely outside the capabilities of most, if not all, CPOE systems, at least for now.
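To make the idea of modulating warnings by context concrete, the purely hypothetical sketch below scores an alert using the factors named above (severity of illness, provider experience, medication risk, warning type) and interrupts the provider only above a threshold. The weights, threshold, and function names are invented for illustration and are not derived from this study or any CPOE product.

```python
# Hedged, illustrative sketch of context-aware alert prioritization; all weights are assumed.
def alert_priority(severity_of_illness, provider_is_trainee, ismp_high_alert, warning_type):
    """Return a score; higher means the warning is more worth interrupting for."""
    score = 0.0
    score += 2.0 if ismp_high_alert else 0.0          # medication risk
    score += 1.0 * severity_of_illness                # e.g., 0 (low) to 3 (high)
    score += 1.0 if provider_is_trainee else 0.0      # provider experience
    score += {"interaction": 1.5, "allergy": 2.0,
              "duplicate": 0.5, "adverse reaction": 2.0}.get(warning_type, 1.0)
    return score

def should_interrupt(score, threshold=4.0):
    return score >= threshold

s = alert_priority(severity_of_illness=2, provider_is_trainee=True,
                   ismp_high_alert=True, warning_type="interaction")
print(s, should_interrupt(s))  # 6.5 True
```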
Our study has some limitations. First, it was limited to medications that generated a single warning. We did this for ease of analysis and so that we could assess provider response to each warning type without bias from simultaneously occurring warnings; however, caregiver response to multiple warnings appearing simultaneously for a particular medication order might be quite different. Second, we did not include any assessment of the number of medications ordered by each provider type or for each patient, either of which could significantly affect provider response to warnings. Third, as previously noted, we did not include any assessment of the validity of the warnings, beyond the 4 main categories described, which could also significantly affect provider response. However, although the validity of interaction warnings varies significantly from 1 medication to another, the validity of duplicate, allergy, and adverse reaction warnings in the described system is essentially the same for all medications. Fourth, it is possible that providers modified or even erased their orders after selecting override in response to a warning; it is also possible that providers reentered the same order after choosing erase. Unfortunately, auditing for such actions would be extremely laborious. Finally, the study was conducted at a single medical center using a single order-entry system. The system used at our medical center is in use at one-third of the 6,000 hospitals in the United States, though certainly not all are using our version. Even if a hospital were using the same CPOE version and interaction database as our institution, variations in patient population and local decisions modifying how the database interacts with the warning presentation system might affect reproducibility at that institution.
Commonly encountered medication warnings are overridden at extremely high rates, and in our study this was particularly so for medications on the ISMP list when ordered by physicians in training. Warnings of little clinical significance must be identified and eliminated, the most important warnings need to be visually distinct to increase user attention, and further research should examine the patient, provider, setting, and medication factors that affect user responses to warnings, so that warnings can be customized accordingly and their significance increased. Doing so will enable us to realize the full potential of our CPOE systems and increase their power to protect our most vulnerable patients from our most dangerous medications, particularly when they are cared for by our most inexperienced physicians.
Acknowledgements
The authors thank, in particular, Scott Carey, Research Informatics Manager, for assistance with data collection. Additional thanks go to Olga Sherman and Kathleen Ancinich for assistance with data collection and management.
Disclosures: This research was supported in part by the Johns Hopkins Institute for Clinical and Translational Research. All listed authors contributed substantially to the study conception and design, analysis and interpretation of data, drafting the article or revising it critically for important intellectual content, and final approval of the version to be published. No one who fulfills these criteria has been excluded from authorship. This research received no specific grant from any funding agency in the public, commercial, or not‐for‐profit sectors. The authors have no competing interests to declare.
1. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998;280:1311–1316.
2. Effects of computerized provider order entry on prescribing practices. Arch Intern Med. 2000;160:2741–2747.
3. Effects of computerized clinician decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293:1223–1238.
4. The effect of computerized physician order entry with clinical decision support on the rates of adverse drug events: a systematic review. J Gen Intern Med. 2008;23:451–458.
5. The impact of computerized physician medication order entry in hospitalized patients—a systematic review. Int J Med Inform. 2008;77:365–376.
6. What evidence supports the use of computerized alerts and prompts to improve clinicians' prescribing behavior? J Am Med Inform Assoc. 2009;16:531–538.
7. Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review. J Am Med Inform Assoc. 2009;16:613–623.
8. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13:138–147.
9. Evaluating clinical decision support systems: monitoring CPOE order check override rates in the Department of Veterans Affairs' Computerized Patient Record System. J Am Med Inform Assoc. 2008;15:620–626.
10. GPs' views on computerized drug interaction alerts: questionnaire survey. J Clin Pharm Ther. 2002;27:377–382.
11. Clinicians' assessments of electronic medication safety alerts in ambulatory care. Arch Intern Med. 2009;169:1627–1632.
12. A mixed method study of the merits of e-prescribing drug alerts in primary care. J Gen Intern Med. 2008;23:442–446.
13. CPOE and clinical decision support in hospitals: getting the benefits: comment on "Unintended effects of a computerized physician order entry nearly hard-stop alert to prevent a drug interaction." Arch Intern Med. 2010;170:1583–1584.
14. Critical drug-drug interactions for use in electronic health records systems with computerized physician order entry: review of leading approaches. J Patient Saf. 2011;7:61–65.
15. Clinical decision support systems could be modified to reduce 'alert fatigue' while still minimizing the risk of litigation. Health Aff (Millwood). 2011;30:2310–2317.
16. Critical issues associated with drug-drug interactions: highlights of a multistakeholder conference. Am J Health Syst Pharm. 2011;68:941–946.
17. Development of a context model to prioritize drug safety alerts in CPOE systems. BMC Med Inform Decis Mak. 2011;11:35.
18. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support. J Am Med Inform Assoc. 2011;18:479–484.
19. How to improve the delivery of medication alerts within computerized physician order entry systems: an international Delphi study. J Am Med Inform Assoc. 2011;18:760–766.
20. Physicians' decisions to override computerized drug alerts in primary care. Arch Intern Med. 2003;163:2625–2631.
21. Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc. 2006;13:5–11.
22. Optimizing the acceptance of medication-based alerts by physicians during CPOE implementation in a community hospital environment. AMIA Annu Symp Proc. 2007:701–705.
23. The use of a CPOE log for the analysis of physicians' behavior when responding to drug-duplication reminders. Int J Med Inform. 2008;77:499–506.
24. Overrides of medication alerts in ambulatory care. Arch Intern Med. 2009;169:305–311.
25. Drug safety alert generation and overriding in a large Dutch university medical centre. Pharmacoepidemiol Drug Saf. 2009;18:941–947.
26. Patient-specific electronic decision support reduces prescription of excessive doses. Qual Saf Health Care. 2010;19:e15.
27. Survival from in-hospital cardiac arrest during nights and weekends. JAMA. 2008;299:785–792.
28. Managing medications in clinically complex elders: "There's got to be a happy medium." JAMA. 2010;304:1592–1601.
29. Agency for Healthcare Research and Quality. Safety culture. Available at: http://psnet.ahrq.gov/primer.aspx?primerID=5. Accessed October 29, 2013.
30. Institute for Safe Medication Practice. List of High-Alert Medications. Available at: http://www.ismp.org/Tools/highalertmedications.pdf. Accessed June 18, 2013.
31. First Databank. FDB AlertSpace. Available at: http://www.fdbhealth.com/solutions/fdb-alertspace. Accessed July 3, 2014.
32. Improving allergy alerting in a computerized physician order entry system. Proc AMIA Symp. 2000:2–6.
33. A clinical data warehouse-based process for refining medication orders alerts. J Am Med Inform Assoc. 2012;19:782–785.
34. Drug-drug interactions that should be non-interruptive in order to reduce alert fatigue in electronic health records. J Am Med Inform Assoc. 2013;20:489–493.
35. High-priority drug-drug interactions for use in electronic health records. J Am Med Inform Assoc. 2012;19:735–743.
36. Design of decision support interventions for medication prescribing. Int J Med Inform. 2013;82:492–503.
37. A randomized trial of the effectiveness of on-demand versus computer-triggered drug decision support in primary care. J Am Med Inform Assoc. 2008;15:430–438.
38. Tiering drug-drug interaction alerts by severity increases compliance rates. J Am Med Inform Assoc. 2009;16:40–46.
39. A review of human factors principles for the design and implementation of medication safety alerts in clinical information systems. J Am Med Inform Assoc. 2010;17:493–501.
40. Unintended effects of a computerized physician order entry nearly hard-stop alert to prevent a drug interaction: a randomized controlled trial. Arch Intern Med. 2010;170:1578–1583.
41. Randomized clinical trial of a customized electronic alert requiring an affirmative response compared to a control group receiving a commercial passive CPOE alert: NSAID—warfarin co-prescribing as a test case. J Am Med Inform Assoc. 2010;17:411–415.
42. Making electronic prescribing alerts more effective: scenario-based experimental study in junior doctors. J Am Med Inform Assoc. 2011;18:789–798.
43. Development and preliminary evidence for the validity of an instrument assessing implementation of human-factors principles in medication-related decision-support systems—I-MeDeSA. J Am Med Inform Assoc. 2011;18(suppl 1):i62–i72.
44. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc. 2007;14:29–40.
45. Physicians' perceptions on the usefulness of contextual information for prioritizing and presenting alerts in Computerized Physician Order Entry systems. BMC Med Inform Decis Mak. 2012;12:111.
46. Computerized clinical decision support systems for drug prescribing and management: a decision-maker-researcher partnership systematic review. Implement Sci. 2011;6:89.
47. A successful model and visual design for creating context-aware drug-drug interaction alerts. AMIA Annu Symp Proc. 2011;2011:339–348.
Many computerized provider order entry (CPOE) systems suffer from having too much of a good thing. Few would question the beneficial effect of CPOE on medication order clarity, completeness, and transmission.[1, 2] When mechanisms for basic decision support have been added, however, such as allergy, interaction, and duplicate warnings, reductions in medication errors and adverse events have not been consistently achieved.[3, 4, 5, 6, 7] This is likely due in part to the fact that ordering providers override medication warnings at staggeringly high rates.[8, 9] Clinicians acknowledge that they are ignoring potentially valuable warnings,[10, 11] but they suffer from alert fatigue caused by the sheer number of messages, many of which they judge to be of low value.[11, 12]
Redesign of medication alert systems to increase their signal‐to‐noise ratio is badly needed,[13, 14, 15, 16] and will need to consider the clinical significance of alerts, their presentation, and context‐specific factors that potentially contribute to warning effectiveness.[17, 18, 19] Relatively few studies, however, have objectively looked at context factors such as the characteristics of providers, patients, medications, and warnings that are associated with provider responses to warnings,[9, 20, 21, 22, 23, 24, 25] and only 2 have studied how warning acceptance is associated with medication risk.[18, 26] We wished to explore these factors further. Warning acceptance has been shown to be higher, at least in the outpatient setting, when orders are entered by low‐volume prescribers for infrequently encountered warnings,[24] and there is some evidence that patients receive higher‐quality care during the day.[27] Significant attention has been placed in recent years on inappropriate prescribing in older patients,[28] and on creating a culture of safety in healthcare.[29] We therefore hypothesized that our providers would be more cautious, and medication warning acceptance rates would be higher, when orders were entered for patients who were older or with more complex medical problems, when they were entered during the day by caregivers who entered few orders, when the medications ordered were potentially associated with greater risk, and when the warnings themselves were infrequently encountered.
METHODS
Setting and Caregivers
Johns Hopkins Bayview Medical Center (JHBMC) is a 400‐bed academic medical center serving southeastern Baltimore, Maryland. Prescribing caregivers include residents and fellows who rotate to both JHBMC and Johns Hopkins Hospital, internal medicine hospitalists, other attending physicians (including teaching attendings for all departments, and hospitalists and clinical associates for departments other than internal medicine), and nurse practitioners and physician assistants from most JHBMC departments. Nearly 100% of patients on the surgery, obstetrics/gynecology, neurology, psychiatry, and chemical dependence services are hospitalized on units dedicated to their respective specialty, and the same is true for approximately 95% of medicine patients.
Order Entry
JHBMC began using a client‐server order entry system by MEDITECH (Westwood, MA) in July 2003. Provider order entry was phased in beginning in October 2003 and completed by the end of 2004. MEDITECH version 5.64 was in use during the study period. Medications may generate duplicate, interaction, allergy, adverse reaction, and dose warnings each time they are ordered during a patient ordering session. Duplicate warnings are generated when a medication (regardless of route) is ordered that is already on the patient's active medication list, was on that list in the preceding 24 hours, or is being ordered simultaneously. A drug‐interaction database licensed from First DataBank (South San Francisco, CA) and updated monthly classifies potential drug‐drug interactions as contraindicated, severe, intermediate, or mild. Interactions classified as contraindicated by First DataBank are included in the severe category in MEDITECH 5.64. During the study period, JHBMC's version of MEDITECH was configured so that providers were warned of potential severe and intermediate drug‐drug interactions, but not mild ones. No other customizations had been made. Patients' histories of allergies and other adverse responses to medications can be entered by any credentialed staff member. They are maintained together in an allergies section of the electronic medical record, but are identified as either allergies or adverse reactions at the time they are entered, and each category generates its own warnings.
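To make the warning-generation logic above concrete, a minimal sketch follows (in Python). The class and function names (Order, generates_duplicate_warning) and the data structures passed in are assumptions for illustration only; they do not describe MEDITECH's actual implementation.

```python
# A minimal sketch of the duplicate-warning rule described in the text:
# warn when the same medication (any route) is on the active list, was on it
# within the preceding 24 hours, or is being ordered in the same session.
# All names and structures here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Order:
    medication: str      # generic name; route is ignored for duplicate checks
    ordered_at: datetime

def generates_duplicate_warning(new_order: Order,
                                active_meds: set,
                                recently_discontinued: dict,
                                same_session_meds: set) -> bool:
    """Return True if the order should trigger a duplicate warning."""
    med = new_order.medication
    if med in active_meds or med in same_session_meds:
        return True
    stopped_at = recently_discontinued.get(med)  # medication -> discontinuation time
    return stopped_at is not None and new_order.ordered_at - stopped_at <= timedelta(hours=24)
```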
When more than 1 duplicate, interaction, allergy, or adverse reaction warning is generated for a particular medication, all appear listed on a single screen in identical fonts. No visual distinction is made between severe and intermediate drug‐drug interactions; for these, the category of medication ordered is followed by the category of the medication for which there is a potential interaction. A details button can be selected to learn specifically which medications are involved and the severity and nature of the potential interactions identified. In response to the warnings, providers can choose to override the warnings, erase the order, or replace the order by clicking 1 of 3 buttons at the bottom of the screen. Warnings are not repeated unless the medication is reordered for that patient. Dose warnings appear on a subsequent screen and are not addressed in this article.
Nurses are discouraged from entering verbal orders but do have the capacity to do so, at which time they encounter and must respond to the standard medication warnings, if any. Medical students are able to enter orders, at which time they also encounter and must respond to the standard medication warnings; their orders must then be cosigned by a licensed provider before they can be processed. Warnings encountered by nurses and medical students are not repeated at the time of cosignature by a licensed provider.
Data Collection
We collected data regarding all medication orders placed in our CPOE system from October 1, 2009 to April 20, 2010 for all adult patients. Intensive care unit (ICU) patients were excluded, in anticipation of a separate analysis. Hospitalizations under observation were also excluded. We then ran a report showing all medications that generated any number of warnings of any type (duplicate, interaction, allergy, or adverse reaction) for the same population. Warnings generated during readmissions that occurred at any point during the study period (ranging from 1 to 21 times) were excluded, because these patients likely had many, if not all, of the same medications ordered during their readmissions as during their initial hospitalization, which would unduly influence the analysis if retained.
There was wide variation in the number of warnings generated per medication and in the number of each warning type per medication among those that generated multiple warnings. Therefore, for ease of analysis and to ensure that we could accurately characterize the response to each individual warning type, we thereafter focused on the medications that generated single warnings during the study period. For each single warning we obtained patient name, account number, event date and time, hospital unit at the time of the event, ordered medication, ordering staff member, warning type, and staff member response to the warning (eg, override warning or erase order [accept the warning]). The response replace was used very infrequently, and warnings that resulted in this response were therefore excluded. Medications available in more than 1 form included the route of administration in their name, and from this they were categorized as parenteral or nonparenteral. All nonparenteral or parenteral forms of a given medication were grouped together as 1 medication (eg, morphine sustained release and morphine elixir were classified as a single medication, nonparenteral morphine). Medications were further categorized according to whether or not they were on the Institute for Safe Medication Practice (ISMP) List of High‐Alert Medications.[30]
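As an illustration of the grouping just described, the following sketch (Python) collapses dosage forms to a single medication and flags route and ISMP status. The helper names, the form-descriptor list, and the example ISMP set are assumptions, not the study's actual code.

```python
# Illustrative only: group all forms of a medication together and categorize
# by route and ISMP high-alert status, as described in the text.
PARENTERAL_KEYWORDS = {"injectable", "iv", "intravenous", "im", "subcutaneous"}
FORM_DESCRIPTORS = {"sustained", "release", "elixir", "sr", "er", "tablet", "capsule"}

def base_name(medication: str) -> str:
    """Collapse dosage forms so 'morphine sustained release' and
    'morphine elixir' both map to 'morphine'."""
    return " ".join(w for w in medication.lower().split() if w not in FORM_DESCRIPTORS)

def categorize(medication: str, route: str, ismp_list: set) -> dict:
    """Return the grouped name, a parenteral flag, and ISMP list status."""
    name = base_name(medication)
    return {
        "medication": name,
        "parenteral": route.lower() in PARENTERAL_KEYWORDS,
        "ismp_high_alert": name in ismp_list,
    }

# Both forms group to one nonparenteral medication, 'morphine':
print(categorize("Morphine Sustained Release", "oral", {"warfarin", "heparin"}))
print(categorize("Morphine Elixir", "oral", {"warfarin", "heparin"}))
```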
The study was approved by the Johns Hopkins Institutional Review Board.
Analysis
We collected descriptive data about patients and providers. Age and length of stay (LOS) at the time of the event were calculated from each patient's admission date and date of birth, and grouped into quartiles. Hospital units were grouped according to which service or services they primarily served. Medications were grouped into quartiles according to the total number of warnings they generated during the study period. Warnings were dichotomously categorized according to whether they were overridden or accepted. Unpaired t tests were used to compare continuous variables for the 2 groups, and χ2 tests were used to compare categorical variables. A multivariable logistic regression was then performed, using variables with a P value of <0.10 in the univariate analysis, to control for confounders and identify independent predictors of medication warning acceptance. All analyses were performed using Intercooled Stata 12 (StataCorp, College Station, TX).
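For readers who want a concrete picture of this two-step approach, a rough Python sketch appears below. The original analysis was performed in Stata; the data frame and column names (accepted, age_quartile, and so on) are placeholders, not the study dataset.

```python
# Sketch of the analysis described in the text: univariate screening of
# categorical predictors at P < 0.10, then multivariable logistic regression
# on warning acceptance. Variable names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

def univariate_screen(df: pd.DataFrame, outcome: str, predictors: list) -> list:
    """Keep categorical predictors whose chi-square P value is < 0.10."""
    keep = []
    for var in predictors:
        table = pd.crosstab(df[var], df[outcome])
        _, p, _, _ = chi2_contingency(table)
        if p < 0.10:
            keep.append(var)
    return keep

def fit_multivariable(df: pd.DataFrame, outcome: str, predictors: list):
    """Logistic regression of acceptance (coded 0/1) on the screened predictors."""
    formula = outcome + " ~ " + " + ".join("C(%s)" % v for v in predictors)
    return smf.logit(formula, data=df).fit()

# Usage with a hypothetical one-row-per-warning data frame 'warnings':
# kept = univariate_screen(warnings, "accepted",
#                          ["age_quartile", "unit", "caregiver_type", "ismp", "warning_type"])
# model = fit_multivariable(warnings, "accepted", kept)
# print(model.summary())  # adjusted odds ratios are exp(model.params)
```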
RESULTS
A total of 259,656 medication orders were placed for adult non‐ICU patients during the 7‐month study period. Of those orders, 45,835 generated 1 or more medication warnings. The median number of warnings per patient was 4 (interquartile range [IQR]=2–8; mean=5.9, standard deviation [SD]=6.2), with a range from 1 to 84. The median number of warnings generated per provider during the study period was 36 (IQR=6–106; mean=87.4, SD=133.7), with a range of 1 to 1096.
There were 40,391 orders, involving 454 medications, placed for adult non‐ICU patients that generated a single medication warning during the 7‐month study period (excluding orders with the response replace, which was used 20 times). Data regarding the patients and providers associated with the orders generating single warnings are shown in Table 1. Most patients were on medicine units, and most orders were entered by residents. Patients' LOS at the time the orders were placed ranged from 0 to 118 days (median=1, IQR=0–4; mean=4.0, SD=7.2). The median number of single warnings per patient was 4 (IQR=2–8; mean=6.1, SD=6.5), with a range from 1 to 84. The median number of single warnings generated per provider during the study period was 15 (IQR=3–73; mean=61.7, SD=109.6), with a range of 1 to 1057.
Table 1. Characteristics of the patients and caregivers associated with orders generating single medication warnings

Characteristic | No. (%)
---|---
Patients (N=6,646) |
Age |
15–45 years | 2,048 (31%)
46–57 years | 1,610 (24%)
58–72 years | 1,520 (23%)
73–104 years | 1,468 (22%)
Gender |
Male | 2,934 (44%)
Hospital unit |
Medicine | 2,992 (45%)
Surgery | 1,836 (28%)
Neuro/psych/chem dep | 1,337 (20%)
OB/GYN | 481 (7%)
Caregivers (N=655) |
Resident | 248 (38%)
Nurse | 154 (24%)
Attending or other | 97 (15%)
NP/PA | 69 (11%)
IM hospitalist | 31 (5%)
Fellow | 27 (4%)
Medical student | 23 (4%)
Pharmacist | 6 (1%)
Patient and caregiver characteristics for the medication orders that generated single warnings are shown in Table 2. The majority of medications were nonparenteral and not on the ISMP list (Table 3). Most warnings generated were either duplicate (47%) or interaction warnings (47%). Warnings of a particular type were repeated 14.5% of the time for a particular medication and patient (from 2 to 24 times, median=2, IQR=2–2, mean=2.7, SD=1.4), and 9.8% of the time for a particular caregiver, medication, and patient (from 2 to 18 times, median=2, IQR=2–2, mean=2.4, SD=1.1).
Table 2. Warning acceptance by patient, caregiver, order, and medication characteristics (univariate analysis)

Variable | No. of Warnings (%) | No. of Warnings Accepted (%) | P
---|---|---|---
Patient age | | |
15–45 years | 10,881 (27) | 602 (5.5%) | <0.001
46–57 years | 9,733 (24) | 382 (3.9%) |
58–72 years | 10,000 (25) | 308 (3.1%) |
73–104 years | 9,777 (24) | 262 (2.7%) |
Patient gender | | |
Female | 23,395 (58) | 866 (3.7%) | 0.074
Male | 16,996 (42) | 688 (4.1%) |
Patient length of stay | | |
<1 day | 10,721 (27) | 660 (6.2%) | <0.001
1 day | 10,854 (27) | 385 (3.5%) |
2–4 days | 10,424 (26) | 277 (2.7%) |
5–118 days | 8,392 (21) | 232 (2.8%) |
Patient hospital unit | | |
Medicine | 20,057 (50) | 519 (2.6%) | <0.001
Surgery | 10,274 (25) | 477 (4.6%) |
Neuro/psych/chem dep | 8,279 (21) | 417 (5.0%) |
OB/GYN | 1,781 (4) | 141 (7.9%) |
Ordering caregiver | | |
Resident | 22,523 (56) | 700 (3.1%) | <0.001
NP/PA | 7,534 (19) | 369 (4.9%) |
IM hospitalist | 5,048 (13) | 155 (3.1%) |
Attending | 3,225 (8) | 219 (6.8%) |
Fellow | 910 (2) | 34 (3.7%) |
Nurse | 865 (2) | 58 (6.7%) |
Medical student | 265 (<1) | 17 (6.4%) |
Pharmacist | 21 (<1) | 2 (9.5%) |
Day ordered | | |
Weekday | 31,499 (78) | 1,276 (4.1%) | <0.001
Weekend | 8,892 (22) | 278 (3.1%) |
Time ordered | | |
0000–0559 | 4,231 (11) | 117 (2.8%) | <0.001
0600–1159 | 11,696 (29) | 348 (3.0%) |
1200–1759 | 15,879 (39) | 722 (4.6%) |
1800–2359 | 8,585 (21) | 367 (4.3%) |
Administration route (no. of meds) | | |
Nonparenteral (339) | 27,086 (67) | 956 (3.5%) | <0.001
Parenteral (115) | 13,305 (33) | 598 (4.5%) |
ISMP List of High‐Alert Medications status (no. of meds)[30] | | |
Not on ISMP list (394) | 27,503 (68) | 1,251 (4.5%) | <0.001
On ISMP list (60) | 12,888 (32) | 303 (2.4%) |
No. of warnings per med (no. of meds) | | |
1106–2133 (7) | 9,869 (24) | 191 (1.9%) | <0.001
468–1034 (13) | 10,014 (25) | 331 (3.3%) |
170–444 (40) | 10,182 (25) | 314 (3.1%) |
1–169 (394) | 10,326 (26) | 718 (7.0%) |
Warning type (no. of meds) | | |
Duplicate (369) | 19,083 (47) | 1,041 (5.5%) | <0.001
Interaction (315) | 18,894 (47) | 254 (1.3%) |
Allergy (138) | 2,371 (6) | 243 (10.0%) |
Adverse reaction (14) | 43 (0.1) | 16 (37%) |
Table 3. Independent predictors of medication warning acceptance (multivariable logistic regression)

Variable | Adjusted OR | 95% CI
---|---|---
Patient age | |
15–45 years | 1.00 | Reference
46–57 years | 0.89 | 0.77–1.02
58–72 years | 0.85 | 0.73–0.99
73–104 years | 0.91 | 0.77–1.08
Patient gender | |
Female | 1.00 | Reference
Male | 1.26 | 1.13–1.41
Patient length of stay | |
<1 day | 1.00 | Reference
1 day | 0.65 | 0.55–0.76
2–4 days | 0.49 | 0.42–0.58
5–118 days | 0.49 | 0.41–0.58
Patient hospital unit | |
Medicine | 1.00 | Reference
Surgery | 1.45 | 1.25–1.68
Neuro/psych/chem dep | 1.35 | 1.15–1.58
OB/GYN | 2.43 | 1.92–3.08
Ordering caregiver | |
Resident | 1.00 | Reference
NP/PA | 1.63 | 1.42–1.88
IM hospitalist | 1.24 | 1.02–1.50
Attending | 1.83 | 1.54–2.18
Fellow | 1.41 | 0.98–2.03
Nurse | 1.92 | 1.44–2.57
Medical student | 1.17 | 0.70–1.95
Pharmacist | 3.08 | 0.67–14.03
Medication factors | |
Nonparenteral | 1.00 | Reference
Parenteral | 1.79 | 1.59–2.03
ISMP High‐Alert Medication status[30] | |
Not on ISMP list | 1.00 | Reference
On ISMP list | 0.37 | 0.32–0.43
No. of warnings per medication | |
1106–2133 | 1.00 | Reference
468–1034 | 2.30 | 1.90–2.79
170–444 | 2.25 | 1.85–2.73
1–169 | 4.10 | 3.42–4.92
Warning type | |
Duplicate | 1.00 | Reference
Interaction | 0.24 | 0.21–0.28
Allergy | 2.28 | 1.94–2.68
Adverse reaction | 9.24 | 4.52–18.90
In all, 1,554 warnings (4%) were erased (ie, accepted by clinicians). In univariate analysis, only patient gender was not associated with warning acceptance. Patient age, LOS, hospital unit at the time of order entry, ordering caregiver type, day and time the medication was ordered, administration route, presence on the ISMP list, warning frequency, and warning type were all significantly associated with warning acceptance (Table 2).
Older patient age, longer LOS, presence of the medication on the ISMP list, and interaction warning type were all negatively associated with warning acceptance in multivariable analysis. Warning acceptance was positively associated with male patient gender, being on a service other than medicine, being a caregiver other than a resident, parenteral medications, lower warning frequency, and allergy or adverse reaction warning types (Table 3).
The 20 medications that generated the most single warnings are shown in Table 4. Medications on the ISMP list accounted for 8 of these top 20. Duplicate and interaction warnings accounted for most of the warnings generated, except for parenteral hydromorphone, oral oxycodone, parenteral morphine, and oral hydromorphone, each of which generated more allergy than interaction warnings.
Table 4. The 20 medications that generated the most single warnings

Medication | ISMP List | No. of Warnings | Duplicate, No. (%) | Interaction, No. (%) | Allergy, No. (%) | Adverse Reaction, No. (%)
---|---|---|---|---|---|---
Hydromorphone injectable | Yes | 2,133 | 1,584 (74.3) | 127 (6.0) | 422 (19.8) |
Metoprolol | | 1,432 | 550 (38.4) | 870 (60.8) | 12 (0.8) |
Aspirin | | 1,375 | 212 (15.4) | 1,096 (79.7) | 67 (4.9) |
Oxycodone | Yes | 1,360 | 987 (72.6) | 364 (26.8) | 9 (0.7) |
Potassium chloride | | 1,296 | 379 (29.2) | 917 (70.8) | |
Ondansetron injectable | | 1,167 | 1,013 (86.8) | 153 (13.1) | 1 (0.1) |
Aspart insulin injectable | Yes | 1,106 | 643 (58.1) | 463 (41.9) | |
Warfarin | Yes | 1,034 | 298 (28.8) | 736 (71.2) | |
Heparin injectable | Yes | 1,030 | 205 (19.9) | 816 (79.2) | 9 (0.3) |
Furosemide injectable | | 980 | 438 (45.0) | 542 (55.3) | |
Lisinopril | | 926 | 225 (24.3) | 698 (75.4) | 3 (0.3) |
Acetaminophen | | 860 | 686 (79.8) | 118 (13.7) | 54 (6.3) | 2 (0.2)
Morphine injectable | Yes | 804 | 467 (58.1) | 100 (12.4) | 233 (29.0) | 4 (0.5)
Diazepam | | 786 | 731 (93.0) | 41 (5.2) | 14 (1.8) |
Glargine insulin injectable | Yes | 746 | 268 (35.9) | 478 (64.1) | |
Ibuprofen | | 713 | 125 (17.5) | 529 (74.2) | 54 (7.6) | 5 (0.7)
Hydromorphone | Yes | 594 | 372 (62.6) | 31 (5.2) | 187 (31.5) | 4 (0.7)
Furosemide | | 586 | 273 (46.6) | 312 (53.2) | 1 (0.2) |
Ketorolac injectable | | 487 | 39 (8.0) | 423 (86.9) | 23 (4.7) | 2 (0.4)
Prednisone | | 468 | 166 (35.5) | 297 (63.5) | 5 (1.1) |
DISCUSSION
Medication warnings in our study were frequently overridden, particularly when encountered by residents, for patients with a long LOS and on the internal medicine service, and for medications generating the most warnings and on the ISMP list. Disturbingly, this means that potentially important warnings for the medications with the highest potential for harm, and for possibly the sickest and most complex patients, were the ones most often ignored by the young physicians in training who should have had the most to gain from them. Of course, this is not entirely surprising. Despite our hope that a culture of safety would influence young physicians' actions when caring for these patients and prescribing these medications, these are the patients and medications for whom the most warnings are generated, and these physicians are the ones entering the most orders. Only 13% of the medications studied were on the ISMP list, but they generated 32% of the warnings. We controlled for number of warnings and ISMP list status, but not for warning validity. Most likely, high‐risk medications have been set up with more warnings, many of them of lower quality, in an errant but well‐intentioned effort to make them safer. If developers of CPOE systems want to gain serious traction in using decision support to promote safe prescribing, they must take substantial action to draw attention to important warnings and to decrease the number of clinically insignificant, low‐value warnings that active caregivers encounter daily.
Only 2 prior studies, both by Seidling et al., have specifically examined provider response to warnings for high‐risk medications. Interaction warnings were rarely accepted in 1 of them,[18] as in our study; however, in contrast to our findings, warning acceptance in both studies was higher for drugs with dose‐dependent toxicity.[18, 26] The effect of physician experience on warning acceptance has been addressed in 2 prior studies. Weingart et al. found that residents were more likely than staff physicians to erase medication orders when presented with allergy and interaction warnings in a primary care setting.[20] Long et al. found that physicians younger than 40 years were less likely than older physicians to accept duplicate warnings, but those who had been at the study hospital longer were more likely to accept them.[23] The influence of patient LOS and service on warning acceptance has not previously been described. Further study of each of these factors is needed.
Individual hospitals tend to avoid making modifications to order entry warning systems, because monitoring and maintaining these changes is labor intensive. Some institutions may make the decision to turn off certain categories of alerts, such as intermediate interaction warnings, to minimize the noise their providers encounter. There are even tools for disabling individual alerts or groups of alerts, such as that available for purchase from our interaction database vendor.[31] However, institutions may fear litigation should an adverse event be attributed to a disabled warning.[15, 16] Clearly, a comprehensive, health system‐wide approach is warranted.[13, 15] To date, published efforts describing ways to improve the effectiveness of medication warning systems have focused on either heightening the clinical significance of alerts[14, 21, 22, 32, 33, 34, 35, 36] or altering their presentation and how providers experience them.[21, 36, 37, 38, 39, 40, 41, 42, 43] The single medication warnings our providers receive are all presented in an identical font, and presumably response to each would be different if they were better distinguished from each other. We also found that a small but significant number of warnings were repeated for a given patient and even a given provider. If the providers knew they would only be presented with warnings the first time they occurred for a given patient and medication, they might be more attuned to the remaining warnings. Previous studies describe context‐specific decision support for medication ordering[44, 45, 46]; however, only 1 has described the use of patient context factors to modify when or how warnings are presented to providers.[47] None have described tailoring allergy, duplicate, and interaction warnings according to medication or provider types. If further study confirms our findings, modulating basic warning systems according to severity of illness, provider experience, and medication risk could powerfully increase their effectiveness. Of course, this would be extremely challenging to achieve, and is likely outside the capabilities of most, if not all, CPOE systems, at least for now.
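As a purely illustrative sketch of the kind of modulation proposed above, and not a description of any existing CPOE capability, a context-aware alert tier might combine medication risk, patient complexity, prescriber experience, and repetition; the weights, thresholds, and inputs below are invented for illustration.

```python
# Hypothetical context-aware alert tiering; all weights and cutoffs are
# invented to illustrate the idea, not derived from the study data.
def alert_tier(ismp_high_alert: bool,
               patient_los_days: float,
               prescriber_is_trainee: bool,
               times_already_shown: int) -> str:
    score = 0.0
    score += 2.0 if ismp_high_alert else 0.0         # medication risk
    score += 1.0 if patient_los_days >= 5 else 0.0   # crude proxy for patient complexity
    score += 1.0 if prescriber_is_trainee else 0.0   # less prescribing experience
    score -= 0.5 * times_already_shown               # dampen repeated warnings
    if score >= 3.0:
        return "interruptive"
    if score >= 1.5:
        return "passive"
    return "suppress"

# Example: a high-alert medication for a complex patient ordered by a resident
print(alert_tier(True, 7, True, 0))  # -> "interruptive"
```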
Our study has some limitations. First, it was limited to medications that generated a single warning. We did this for ease of analysis and to ensure that we understood provider response to each warning type without bias from simultaneously occurring warnings; however, caregiver response to multiple warnings appearing simultaneously for a particular medication order might be quite different. Second, we did not assess the number of medications ordered by each provider type or for each patient, either of which could significantly affect provider response to warnings. Third, as previously noted, we did not assess the validity of the warnings beyond the 4 main categories described, which could also significantly affect provider response. However, although the validity of interaction warnings varies substantially from 1 medication to another, the validity of duplicate, allergy, and adverse reaction warnings in the described system is essentially the same for all medications. Fourth, it is possible that providers modified or even erased their orders after selecting override in response to a warning; it is also possible that providers reentered the same order after choosing erase. Unfortunately, auditing for such actions would be extremely laborious. Finally, the study was conducted at a single medical center using a single order‐entry system. The system used at our medical center is in place at one‐third of the 6,000 hospitals in the United States, although certainly not all are using our version. Even if a hospital were using the same CPOE version and interaction database as our institution, variations in patient population and in local decisions modifying how the database interacts with the warning presentation system might affect reproducibility at that institution.
Commonly encountered medication warnings are overridden at extremely high rates, and in our study this was particularly so for medications on the ISMP list and for orders entered by physicians in training. Warnings of little clinical significance must be identified and eliminated, the most important warnings need to be made visually distinct to increase user attention, and further research is needed into the patient, provider, setting, and medication factors that affect user responses to warnings, so that warnings can be customized accordingly and their significance increased. Doing so will enable us to reap the maximum possible benefit from our CPOE systems and increase their power to protect our most vulnerable patients from our most dangerous medications, particularly when those patients are cared for by our most inexperienced physicians.
Acknowledgements
The authors thank, in particular, Scott Carey, Research Informatics Manager, for assistance with data collection. Additional thanks go to Olga Sherman and Kathleen Ancinich for assistance with data collection and management.
Disclosures: This research was supported in part by the Johns Hopkins Institute for Clinical and Translational Research. All listed authors contributed substantially to the study conception and design, analysis and interpretation of data, drafting the article or revising it critically for important intellectual content, and final approval of the version to be published. No one who fulfills these criteria has been excluded from authorship. This research received no specific grant from any funding agency in the public, commercial, or not‐for‐profit sectors. The authors have no competing interests to declare.
© 2015 Society of Hospital Medicine
Reducing Inappropriate Acid Suppressives
Prior studies have found that up to 70% of acid‐suppressive medication (ASM) use in the hospital is not indicated, most commonly for stress ulcer prophylaxis in patients outside of the intensive care unit (ICU).[1, 2, 3, 4, 5, 6, 7] Accordingly, reducing inappropriate use of ASM for stress ulcer prophylaxis in hospitalized patients is 1 of the 5 opportunities for improved healthcare value identified by the Society of Hospital Medicine as part of the American Board of Internal Medicine's Choosing Wisely campaign.[8]
We designed and tested a computerized clinical decision support (CDS) intervention with the goal of reducing use of ASM for stress ulcer prophylaxis in hospitalized patients outside the ICU at an academic medical center.
METHODS
Study Design
We conducted a quasiexperimental study using an interrupted time series to analyze data collected prospectively during clinical care before and after implementation of our intervention. The study was deemed a quality improvement initiative by the Beth Israel Deaconess Medical Center Committee on Clinical Investigations/Institutional Review Board.
Patients and Setting
All admissions of patients >18 years of age to a 649‐bed academic medical center in Boston, Massachusetts, from September 12, 2011 through July 3, 2012 were included. The medical center consists of an East Campus and a West Campus, located across the street from each other. Care for both critically ill and noncritically ill medical and surgical patients occurs on both campuses. Differences include greater proportions of patients with gastrointestinal and oncologic conditions on the East Campus, and renal and cardiac conditions on the West Campus. Additionally, labor and delivery occurs exclusively on the East Campus, and the density of ICU beds is greater on the West Campus. Both campuses use a computer‐based provider order entry (POE) system.
Intervention
Our study was implemented in 2 phases (Figure 1).

Baseline Phase
The purpose of the first phase was to obtain baseline data on ASM use prior to implementing our CDS tool designed to influence prescribing. During this baseline phase, a computerized prompt was activated through our POE system whenever a clinician initiated an order for ASM (histamine 2 receptor antagonists or proton pump inhibitors), asking the clinician to select the reason/reasons for the order based on the following predefined response options: (1) active/recent upper gastrointestinal bleed, (2) continuing preadmission medication, (3) Helicobacter pylori treatment, (4) prophylaxis in patient on medications that increase bleeding risk, (5) stress ulcer prophylaxis, (6) suspected/known peptic ulcer disease, gastritis, esophagitis, gastroesophageal reflux disease, and (7) other, with a free‐text box to input the indication. This indications prompt was rolled out to the entire medical center on September 12, 2011 and remained active for the duration of the study period.
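For teams building a similar indications prompt, the response options can be captured as a small lookup keyed by option code, with support for multiple selections and a required free-text entry for "other." The sketch below is purely illustrative; the option codes, function name, and validation rule are assumptions, not the medical center's actual POE configuration.

```python
# Illustrative sketch of the baseline-phase indications prompt (assumed names, not the actual POE code).
ASM_INDICATIONS = {
    1: "Active/recent upper gastrointestinal bleed",
    2: "Continuing preadmission medication",
    3: "Helicobacter pylori treatment",
    4: "Prophylaxis in patient on medications that increase bleeding risk",
    5: "Stress ulcer prophylaxis",
    6: "Suspected/known PUD, gastritis, esophagitis, or GERD",
    7: "Other",
}

def record_indications(selected_codes, other_text=None):
    """Return the list of indication labels attached to an ASM order.

    selected_codes: option codes chosen by the clinician (one or more).
    other_text: free-text indication, required when option 7 ("Other") is selected.
    """
    if not selected_codes:
        raise ValueError("At least one indication must be selected.")
    labels = [ASM_INDICATIONS[code] for code in selected_codes if code != 7]
    if 7 in selected_codes:
        if not other_text:
            raise ValueError("A free-text indication is required for 'Other'.")
        labels.append(f"Other: {other_text}")
    return labels
```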
Intervention Phase
In the second phase of the study, if a clinician selected stress ulcer prophylaxis as the only indication for ordering ASM, a CDS prompt alerted the clinician that "Stress ulcer prophylaxis is not recommended for patients outside of the intensive care unit" (ASHP Therapeutic Guidelines on Stress Ulcer Prophylaxis. Am J Health‐Syst Pharm. 1999;56:347‐79). The clinician could then select "For use in ICU: Order Medication," "Choose Other Indication," or "Cancel Order." This CDS prompt was rolled out in a staggered manner to the East Campus on January 3, 2012, followed by the West Campus on April 3, 2012.
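The interruptive alert reduces to a single trigger rule: fire only when stress ulcer prophylaxis is the sole indication selected. A minimal sketch of that rule, reusing the option codes assumed in the earlier sketch, is shown here; the returned structure is an illustration, not the actual CDS implementation.

```python
STRESS_ULCER_PROPHYLAXIS = 5  # option code assumed in the sketch above

ALERT_TEXT = (
    "Stress ulcer prophylaxis is not recommended for patients outside of the "
    "intensive care unit."
)
ALERT_ACTIONS = ("For use in ICU: Order Medication", "Choose Other Indication", "Cancel Order")

def intervention_alert(selected_codes):
    """Return the interruptive alert when stress ulcer prophylaxis is the only indication."""
    if set(selected_codes) == {STRESS_ULCER_PROPHYLAXIS}:
        return {"text": ALERT_TEXT, "actions": ALERT_ACTIONS}
    return None  # any other combination of indications: no alert, the order proceeds
```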
Outcomes
The primary outcome was the rate of ASM use with stress ulcer prophylaxis selected as the only indication in a patient located outside of the ICU. We confirmed patient location in the 24 hours after the order was placed. Secondary outcomes were rates of overall ASM use, defined via pharmacy charges, and rates of use on discharge.
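To make the outcome definition concrete, the sketch below shows one way to flag admissions meeting the primary outcome from order-level data; the data frame and column names are hypothetical and are not the study's actual data model.

```python
import pandas as pd

def primary_outcome_by_admission(orders: pd.DataFrame) -> pd.Series:
    """Per-admission flag: >=1 ASM order with stress ulcer prophylaxis as the only
    indication while the patient was outside the ICU.

    Assumed (hypothetical) columns:
      admission_id      -- hospital admission identifier
      indications       -- list of indication labels selected for the order
      in_icu_within_24h -- True if the patient was in an ICU in the 24 h after the order
    """
    sole_sup = orders["indications"].apply(
        lambda labels: set(labels) == {"Stress ulcer prophylaxis"}
    )
    orders = orders.assign(outcome=sole_sup & ~orders["in_icu_within_24h"])
    return orders.groupby("admission_id")["outcome"].any()

# Example usage with a toy data frame:
toy = pd.DataFrame({
    "admission_id": [1, 1, 2],
    "indications": [["Stress ulcer prophylaxis"],
                    ["Continuing preadmission medication"],
                    ["Stress ulcer prophylaxis"]],
    "in_icu_within_24h": [False, False, True],
})
print(primary_outcome_by_admission(toy))  # admission 1 -> True, admission 2 -> False
```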
Statistical Analysis
To ensure stable measurement of trends, we studied at least 3 months before and after the intervention on each campus. We used the Fisher exact test to compare the rates of our primary and secondary outcomes before and after the intervention, stratified by campus. For our primary outcome (at least 1 ASM order with stress ulcer prophylaxis selected as the only indication during hospitalization), we developed a logistic regression model with a generalized estimating equation and an exchangeable working correlation structure to control for admission characteristics (Table 1) and repeated admissions. Using a term for the interaction between time and the intervention, this model allowed us to assess changes in level and trend in the odds of a patient receiving at least 1 ASM order with stress ulcer prophylaxis as the only indication, comparing the period before the intervention with the period after it, stratified by campus. We used a 2‐sided type I error of <0.05 to indicate statistical significance.
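For readers who want to see the modeling approach in code, below is a minimal sketch of a segmented (level-and-trend) GEE logistic regression with an exchangeable working correlation, plus an unadjusted Fisher exact comparison, written in Python with statsmodels and SciPy. The synthetic data frame, variable names, and parameterization (a post-intervention indicator plus a days-since-intervention term standing in for the time-by-intervention interaction) are assumptions for illustration, not the study's analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import fisher_exact

# Synthetic admission-level data, for illustration only.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "patient_id": rng.integers(0, 1500, n),   # grouping variable for repeated admissions
    "study_day": rng.integers(0, 295, n),     # days since the start of the study period
    "age": rng.normal(55, 18, n),
    "charlson": rng.poisson(1.2, n),
})
df["post"] = (df["study_day"] >= 113).astype(int)          # alert live from day 113 (assumed)
df["days_post"] = np.clip(df["study_day"] - 113, 0, None)  # days since the alert went live
df["outcome"] = rng.binomial(1, np.where(df["post"] == 1, 0.01, 0.04))

# Segmented GEE logistic model: baseline trend, immediate level change, and change in trend,
# adjusted for admission characteristics, with an exchangeable working correlation structure.
model = smf.gee(
    "outcome ~ study_day + post + days_post + age + charlson",
    groups="patient_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

level_or = np.exp(result.params["post"])                           # immediate change in odds (OR)
daily_pct_change = (1 - np.exp(result.params["days_post"])) * 100  # % decrease in odds per day

# Unadjusted before/after comparison of a binary outcome (Fisher exact test),
# using placeholder counts rather than the study's data.
odds_ratio, p_value = fisher_exact([[150, 3600], [40, 6150]])
print(level_or, daily_pct_change, p_value)
```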
Table 1. Admission Characteristics, by Campus and Study Phase

| Characteristic | East: Baseline, n=3,747 | East: Intervention, n=6,191 | West: Baseline, n=11,177 | West: Intervention, n=5,285 |
| --- | --- | --- | --- | --- |
Age, y, mean (SD) | 48.1 (18.5) | 47.7 (18.2) | 61.0 (18.0) | 60.3 (18.1) |
Gender, no. (%) | ||||
Female | 2744 (73.2%) | 4542 (73.4%) | 5551 (49.7%) | 2653 (50.2%) |
Male | 1003 (26.8%) | 1649 (26.6%) | 5626 (50.3%) | 2632 (49.8%) |
Race, no. (%) | ||||
Asian | 281 (7.5%) | 516 (8.3%) | 302 (2.7%) | 156 (3%) |
Black | 424 (11.3%) | 667 (10.8%) | 1426 (12.8%) | 685 (13%) |
Hispanic | 224 (6%) | 380 (6.1%) | 619 (5.5%) | 282 (5.3%) |
Other | 378 (10.1%) | 738 (11.9%) | 776 (6.9%) | 396 (7.5%) |
White | 2440 (65.1%) | 3890 (62.8%) | 8054 (72%) | 3766 (71.3%) |
Charlson score, mean (SD) | 0.8 (1.1) | 0.7 (1.1) | 1.5 (1.4) | 1.4 (1.4) |
Gastrointestinal bleeding, no. (%)* | 49 (1.3%) | 99 (1.6%) | 385 (3.4%) | 149 (2.8%) |
Other medication exposures, no. (%) | ||||
Therapeutic anticoagulant | 218 (5.8%) | 409 (6.6%) | 2242 (20.1%) | 1022 (19.3%) |
Prophylactic anticoagulant | 1081 (28.8%) | 1682 (27.2%) | 5999 (53.7%) | 2892 (54.7%) |
NSAID | 1899 (50.7%) | 3141 (50.7%) | 1248 (11.2%) | 575 (10.9%) |
Antiplatelet | 313 (8.4%) | 585 (9.4%) | 4543 (40.6%) | 2071 (39.2%) |
Admitting department, no. (%) | ||||
Surgery | 2507 (66.9%) | 4146 (67%) | 3255 (29.1%) | 1578 (29.9%) |
Nonsurgery | 1240 (33.1%) | 2045 (33%) | 7922 (70.9%) | 3707 (70.1%) |
Any ICU Stay, no. (%) | 217 (5.8%) | 383 (6.2%) | 2786 (24.9%) | 1252 (23.7%) |
RESULTS
There were 26,400 adult admissions during the study period, and 22,330 discrete orders for ASM. Overall, 12,056 (46%) admissions had at least 1 charge for ASM. Admission characteristics were similar before and after the intervention on each campus (Table 1).
Table 2 shows the indications chosen each time ASM was ordered, stratified by campus and study phase. Although selection of stress ulcer prophylaxis decreased on both campuses during the intervention phase, selection of continuing preadmission medication increased.
Table 2. Indications Selected at the Time of ASM Ordering, by Campus and Study Phase

| Indication* | East: Baseline, n=2,062 | East: Intervention, n=3,243 | West: Baseline, n=12,038 | West: Intervention, n=4,987 |
| --- | --- | --- | --- | --- |
Continuing preadmission medication | 910 (44.1%) | 1695 (52.3%) | 5597 (46.5%) | 2802 (56.2%) |
PUD, gastritis, esophagitis, GERD | 440 (21.3%) | 797 (24.6%) | 1303 (10.8%) | 582 (11.7%) |
Stress ulcer prophylaxis | 298 (14.4%) | 100 (3.1%) | 2659 (22.1%) | 681 (13.7%) |
Prophylaxis in patient on medications that increase bleeding risk | 226 (11.0%) | 259 (8.0%) | 965 (8.0%) | 411 (8.2%) |
Active/recent gastrointestinal bleed | 154 (7.5%) | 321 (9.9%) | 1450 (12.0%) | 515 (10.3%) |
Helicobacter pylori treatment | 6 (0.2%) | 2 (0.1%) | 43 (0.4%) | 21 (0.4%) |
Other | 111 (5.4%) | 156 (4.8%) | 384 (3.2%) | 186 (3.7%) |
Table 3 shows the unadjusted comparison of outcomes between baseline and intervention phases on each campus. Use of ASM with stress ulcer prophylaxis as the only indication decreased during the intervention phase on both campuses. There was a nonsignificant reduction in overall rates of use on both campuses, and use on discharge was unchanged. Figure 2 demonstrates the unadjusted and modeled monthly rates of admissions with at least 1 ASM order with stress ulcer prophylaxis selected as the only indication, stratified by campus. After adjusting for the admission characteristics in Table 1, the intervention phase on both campuses showed a significant immediate reduction in the odds of receiving an ASM order with stress ulcer prophylaxis selected as the only indication (East Campus odds ratio [OR]: 0.36, 95% confidence interval [CI]: 0.18–0.71; West Campus OR: 0.41, 95% CI: 0.28–0.60) and a significant change in trend compared with the baseline phase (East Campus: 1.5% daily decrease in the odds of receiving ASM solely for stress ulcer prophylaxis, P=0.002; West Campus: 0.9% daily decrease, P=0.02).
Table 3. Unadjusted Outcomes, by Campus and Study Phase

| Outcome | East: Baseline, n=3,747 | East: Intervention, n=6,191 | P Value* | West: Baseline, n=11,177 | West: Intervention, n=5,285 | P Value* |
| --- | --- | --- | --- | --- | --- | --- |
Any inappropriate acid‐suppressive exposure | 4.0% | 0.6% | <0.001 | 7.7% | 2.2% | <0.001 |
Any acid‐suppressive exposure | 33.1% | 31.8% | 0.16 | 54.5% | 52.9% | 0.05 |
Discharged on acid‐suppressive medication | 18.9% | 19.6% | 0.40 | 34.7% | 34.7% | 0.95 |

DISCUSSION
In this single‐center study, we found that a computerized CDS intervention resulted in a significant reduction in use of ASM for the sole purpose of stress ulcer prophylaxis in patients outside the ICU, a nonsignificant reduction in overall use, and no change in use on discharge. We found low rates of use for the isolated purpose of stress ulcer prophylaxis even before the intervention, and continuing preadmission medication was the most commonly selected indication throughout the study.
Although overall rates of ASM use declined after the intervention, the change was not statistically significant, and was not of the same magnitude as the decline in rates of use for the purpose of stress ulcer prophylaxis. This suggests that our intervention, in part, led to substitution of 1 indication for another. The indication that increased the most after rollout on both campuses was continuing preadmission medication. There are at least 2 possibilities for this finding: (1) the intervention prompted physicians to more accurately record the indication, or (2) physicians falsified the indication in order to execute the order. To explore these possibilities, we reviewed the charts of a random sample of 100 admissions during each of the baseline and intervention phases where continuing preadmission medication was selected as an indication for an ASM order. We found that 6/100 orders in the baseline phase and 7/100 orders in the intervention phase incorrectly indicated that the patient was on ASM prior to admission (P=0.77). This suggests that scenario 1 above is the more likely explanation for the increased use of this indication, and that the intervention, in part, simply unmasked the true rate of use at our medical center for the isolated purpose of stress ulcer prophylaxis.
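The report does not state which statistical test produced the P=0.77 for the comparison of 6/100 versus 7/100 orders; for illustration, a standard two-proportion z-test on those counts yields a value close to the one reported. The snippet below is therefore an assumption about the method, offered only as a plausible reconstruction.

```python
from statsmodels.stats.proportion import proportions_ztest

# 6 of 100 baseline orders vs 7 of 100 intervention orders incorrectly indicated
# that the patient was on ASM prior to admission.
z_stat, p_value = proportions_ztest(count=[6, 7], nobs=[100, 100])
print(f"z = {z_stat:.2f}, P = {p_value:.2f}")  # roughly P = 0.77 for these counts
```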
These findings have implications for others attempting to use computerized CDS to better understand physician prescribing. They suggest that information collected through computer‐based interaction with clinicians at the point of care may not always be accurate or complete. As institutions increasingly use similar interventions to drive behavior, information obtained from such interaction should be validated, and when possible, patient outcomes should be measured.
Our findings suggest that rates of ASM use for the purpose of stress ulcer prophylaxis in the hospital may have declined over the last decade. Studies demonstrating that up to 70% of inpatient use of ASM was inappropriate were conducted 5 to 10 years ago.[1, 2, 3, 4, 5] Since then, studies have demonstrated an increased risk of nosocomial infections in patients on ASM.[9, 10, 11] It is possible that the low rate of use for stress ulcer prophylaxis in our study reflects greater awareness of the risks of these medications; this low baseline rate also limited our ability to detect differences in overall use. It is also possible, however, that a portion of the admissions with continuation of preadmission medication as the indication were started on these medications during a prior hospitalization. Thus, some portion of preadmission use likely represents failed medication reconciliation at a prior discharge. In this context, hospitalization may serve as an opportunity to evaluate the indication for ASM use even when these medications appear as preadmission medications.
There are additional limitations. First, the single‐center nature limits generalizability. Second, the first phase of our study, designed to obtain baseline data on ASM use, may have led to changes in prescribing prior to implementation of our CDS tool. Additionally, we did not validate the accuracy of each of the chosen indications, or the site of initial prescription in the case of preadmission exposure. Last, our study was not powered to investigate changes in rates of nosocomial gastrointestinal bleeding or nosocomial pneumonia owing to the infrequent nature of these complications.
In conclusion, we designed a simple computerized CDS intervention that was associated with a reduction in ASM use for stress ulcer prophylaxis in patients outside the ICU, a nonsignificant reduction in overall use, and no change in use on discharge. The majority of inpatient use represented continuation of preadmission medication, suggesting that interventions to improve the appropriateness of ASM prescribing should span the continuum of care. Future studies should investigate whether it is worthwhile and appropriate to reevaluate continued use of preadmission ASM during an inpatient stay.
Acknowledgements
The authors acknowledge Joshua Guthermann, MBA, and Jane Hui Chen Lim, MBA, for their assistance in the early phases of data analysis, and Long H. Ngo, PhD, for his statistical consultation.
Disclosures: Dr. Herzig was funded by a Young Clinician Research Award from the Center for Integration of Medicine and Innovative Technology, a nonprofit consortium of Boston teaching hospitals and universities, and grant number K23AG042459 from the National Institute on Aging. Dr. Marcantonio was funded by grant number K24AG035075 from the National Institute on Aging. The funding organizations had no involvement in any aspect of the study, including design, conduct, and reporting of the study. Dr. Herzig had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Drs. Herzig and Marcantonio were responsible for the study concept and design. Drs. Herzig, Feinbloom, Howell, and Ms. Adra and Mr. Afonso were responsible for the acquisition of data. Drs. Herzig, Howell, Marcantonio, and Mr. Guess were responsible for the analysis and interpretation of the data. Dr. Herzig drafted the manuscript. All of the authors participated in the critical revision of the manuscript for important intellectual content. Drs. Herzig and Marcantonio were responsible for study supervision. The authors report no conflicts of interest.
References

1. Stress ulcer prophylaxis in hospitalized patients not in intensive care units. Am J Health Syst Pharm. 2007;64(13):1396–1400.
2. Magnitude and economic impact of inappropriate use of stress ulcer prophylaxis in non‐ICU hospitalized patients. Am J Gastroenterol. 2006;101(10):2200–2205.
3. Stress‐ulcer prophylaxis for general medical patients: a review of the evidence. J Hosp Med. 2007;2(2):86–92.
4. Hospital use of acid‐suppressive medications and its fall‐out on prescribing in general practice: a 1‐month survey. Aliment Pharmacol Ther. 2003;17(12):1503–1506.
5. Inadequate use of acid‐suppressive therapy in hospitalized patients and its implications for general practice. Dig Dis Sci. 2005;50(12):2307–2311.
6. Brief report: reducing inappropriate usage of stress ulcer prophylaxis among internal medicine residents. A practice‐based educational intervention. J Gen Intern Med. 2006;21(5):498–500.
7. Inappropriate continuation of stress ulcer prophylactic therapy after discharge. Ann Pharmacother. 2007;41(10):1611–1616.
8. Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486–492.
9. Risk of Clostridium difficile diarrhea among hospital inpatients prescribed proton pump inhibitors: cohort and case‐control studies. CMAJ. 2004;171(1):33–38.
10. Iatrogenic gastric acid suppression and the risk of nosocomial Clostridium difficile infection. Arch Intern Med. 2010;170(9):784–790.
11. Acid‐suppressive medication use and the risk for hospital‐acquired pneumonia. JAMA. 2009;301(20):2120–2128.
12. Healthcare Cost and Utilization Project. Clinical classifications software (CCS) for ICD‐9‐CM. December 2009. Agency for Healthcare Research and Quality, Rockville, MD. Available at: www.hcup‐us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed June 18, 2014.
Prior studies have found that up to 70% of acid‐suppressive medication (ASM) use in the hospital is not indicated, most commonly for stress ulcer prophylaxis in patients outside of the intensive care unit (ICU).[1, 2, 3, 4, 5, 6, 7] Accordingly, reducing inappropriate use of ASM for stress ulcer prophylaxis in hospitalized patients is 1 of the 5 opportunities for improved healthcare value identified by the Society of Hospital Medicine as part of the American Board of Internal Medicine's Choosing Wisely campaign.[8]
We designed and tested a computerized clinical decision support (CDS) intervention with the goal of reducing use of ASM for stress ulcer prophylaxis in hospitalized patients outside the ICU at an academic medical center.
METHODS
Study Design
We conducted a quasiexperimental study using an interrupted time series to analyze data collected prospectively during clinical care before and after implementation of our intervention. The study was deemed a quality improvement initiative by the Beth Israel Deaconess Medical Center Committee on Clinical Investigations/Institutional Review Board.
Patients and Setting
All admissions >18 years of age to a 649‐bed academic medical center in Boston, Massachusetts from September 12, 2011 through July 3, 2012 were included. The medical center consists of an East and West Campus, located across the street from each other. Care for both critically ill and noncritically ill medical and surgical patients occurs on both campuses. Differences include greater proportions of patients with gastrointestinal and oncologic conditions on the East Campus, and renal and cardiac conditions on the West Campus. Additionally, labor and delivery occurs exclusively on the East Campus, and the density of ICU beds is greater on the West Campus. Both campuses utilize a computer‐based provider order entry (POE) system.
Intervention
Our study was implemented in 2 phases (Figure 1).

Baseline Phase
The purpose of the first phase was to obtain baseline data on ASM use prior to implementing our CDS tool designed to influence prescribing. During this baseline phase, a computerized prompt was activated through our POE system whenever a clinician initiated an order for ASM (histamine 2 receptor antagonists or proton pump inhibitors), asking the clinician to select the reason/reasons for the order based on the following predefined response options: (1) active/recent upper gastrointestinal bleed, (2) continuing preadmission medication, (3) Helicobacter pylori treatment, (4) prophylaxis in patient on medications that increase bleeding risk, (5) stress ulcer prophylaxis, (6) suspected/known peptic ulcer disease, gastritis, esophagitis, gastroesophageal reflux disease, and (7) other, with a free‐text box to input the indication. This indications prompt was rolled out to the entire medical center on September 12, 2011 and remained active for the duration of the study period.
Intervention Phase
In the second phase of the study, if a clinician selected stress ulcer prophylaxis as the only indication for ordering ASM, a CDS prompt alerted the clinician that Stress ulcer prophylaxis is not recommended for patients outside of the intensive care unit (ASHP Therapeutic Guidelines on Stress Ulcer Prophylaxis. Am J Health‐Syst Pharm. 1999, 56:347‐79). The clinician could then select either, For use in ICUOrder Medication, Choose Other Indication, or Cancel Order. This CDS prompt was rolled out in a staggered manner to the East Campus on January 3, 2012, followed by the West Campus on April 3, 2012.
Outcomes
The primary outcome was the rate of ASM use with stress ulcer prophylaxis selected as the only indication in a patient located outside of the ICU. We confirmed patient location in the 24 hours after the order was placed. Secondary outcomes were rates of overall ASM use, defined via pharmacy charges, and rates of use on discharge.
Statistical Analysis
To assure stable measurement of trends, we studied at least 3 months before and after the intervention on each campus. We used the Fisher exact test to compare the rates of our primary and secondary outcomes before and after the intervention, stratified by campus. For our primary outcomeat least 1 ASM order with stress ulcer prophylaxis selected as the only indication during hospitalizationwe developed a logistic regression model with a generalized estimating equation and exchangeable working correlation structure to control for admission characteristics (Table 1) and repeated admissions. Using a term for the interaction between time and the intervention, this model allowed us to assess changes in level and trend for the odds of a patient receiving at least 1 ASM order with stress ulcer prophylaxis as the only indication before, compared to after the intervention, stratified by campus. We used a 2‐sided type I error of <0.05 to indicate statistical significance.
Study Phase | Campus | |||
---|---|---|---|---|
East | West | |||
Baseline, n=3,747 | Intervention, n=6,191 | Baseline, n=11,177 | Intervention, n=5,285 | |
| ||||
Age, y, mean (SD) | 48.1 (18.5) | 47.7 (18.2) | 61.0 (18.0) | 60.3 (18.1) |
Gender, no. (%) | ||||
Female | 2744 (73.2%) | 4542 (73.4%) | 5551 (49.7%) | 2653 (50.2%) |
Male | 1003 (26.8%) | 1649 (26.6%) | 5626 (50.3%) | 2632 (49.8%) |
Race, no. (%) | ||||
Asian | 281 (7.5%) | 516 (8.3%) | 302 (2.7%) | 156 (3%) |
Black | 424 (11.3%) | 667 (10.8%) | 1426 (12.8%) | 685 (13%) |
Hispanic | 224 (6%) | 380 (6.1%) | 619 (5.5%) | 282 (5.3%) |
Other | 378 (10.1%) | 738 (11.9%) | 776 (6.9%) | 396 (7.5%) |
White | 2440 (65.1%) | 3890 (62.8%) | 8054 (72%) | 3766 (71.3%) |
Charlson score, mean (SD) | 0.8 (1.1) | 0.7 (1.1) | 1.5 (1.4) | 1.4 (1.4) |
Gastrointestinal bleeding, no. (%)* | 49 (1.3%) | 99 (1.6%) | 385 (3.4%) | 149 (2.8%) |
Other medication exposures, no. (%) | ||||
Therapeutic anticoagulant | 218 (5.8%) | 409 (6.6%) | 2242 (20.1%) | 1022 (19.3%) |
Prophylactic anticoagulant | 1081 (28.8%) | 1682 (27.2%) | 5999 (53.7%) | 2892 (54.7%) |
NSAID | 1899 (50.7%) | 3141 (50.7%) | 1248 (11.2%) | 575 (10.9%) |
Antiplatelet | 313 (8.4%) | 585 (9.4%) | 4543 (40.6%) | 2071 (39.2%) |
Admitting department, no. (%) | ||||
Surgery | 2507 (66.9%) | 4146 (67%) | 3255 (29.1%) | 1578 (29.9%) |
Nonsurgery | 1240 (33.1%) | 2045 (33%) | 7922 (70.9%) | 3707 (70.1%) |
Any ICU Stay, no. (%) | 217 (5.8%) | 383 (6.2%) | 2786 (24.9%) | 1252 (23.7%) |
RESULTS
There were 26,400 adult admissions during the study period, and 22,330 discrete orders for ASM. Overall, 12,056 (46%) admissions had at least 1 charge for ASM. Admission characteristics were similar before and after the intervention on each campus (Table 1).
Table 2 shows the indications chosen each time ASM was ordered, stratified by campus and study phase. Although selection of stress ulcer prophylaxis decreased on both campuses during the intervention phase, selection of continuing preadmission medication increased.
Study Phase | Campus | |||
---|---|---|---|---|
East | West | |||
Baseline, n=2,062 | Intervention, n=3,243 | Baseline, n=12,038 | Intervention, n=4,987 | |
| ||||
Indication* | ||||
Continuing preadmission medication | 910 (44.1%) | 1695 (52.3%) | 5597 (46.5%) | 2802 (56.2%) |
PUD, gastritis, esophagitis, GERD | 440 (21.3%) | 797 (24.6%) | 1303 (10.8%) | 582 (11.7%) |
Stress ulcer prophylaxis | 298 (14.4%) | 100 (3.1%) | 2659 (22.1%) | 681 (13.7%) |
Prophylaxis in patient on medications that increase bleeding risk | 226 (11.0%) | 259 (8.0%) | 965 (8.0%) | 411 (8.2%) |
Active/recent gastrointestinal bleed | 154 (7.5%) | 321 (9.9%) | 1450 (12.0%) | 515 (10.3) |
Helicobacter pylori treatment | 6 (0.2%) | 2 (0.1%) | 43 (0.4%) | 21 (0.4%) |
Other | 111 (5.4%) | 156 (4.8%) | 384 (3.2%) | 186 (3.7%) |
Table 3 shows the unadjusted comparison of outcomes between baseline and intervention phases on each campus. Use of ASM with stress ulcer prophylaxis as the only indication decreased during the intervention phase on both campuses. There was a nonsignificant reduction in overall rates of use on both campuses, and use on discharge was unchanged. Figure 2 demonstrates the unadjusted and modeled monthly rates of admissions with at least 1 ASM order with stress ulcer prophylaxis selected as the only indication, stratified by campus. After adjusting for the admission characteristics in Table 1, during the intervention phase on both campuses there was a significant immediate reduction in the odds of receiving an ASM with stress ulcer prophylaxis selected as the only indication (East Campus odds ratio [OR]: 0.36, 95% confidence interval [CI]: 0.180.71; West Campus OR: 0.41, 95% CI: 0.280.60), and a significant change in trend compared to the baseline phase (East Campus 1.5% daily decrease in odds of receiving ASM solely for stress ulcer prophylaxis, P=0.002; West Campus 0.9% daily decrease in odds of receiving ASM solely for stress ulcer prophylaxis, P=0.02).
Study Phase | Campus | |||||
---|---|---|---|---|---|---|
East | West | |||||
Baseline, n=3,747 | Intervention, n=6,191 | P Value* | Baseline, n=11,177 | Intervention, n=5,285 | P Value* | |
| ||||||
Outcome | ||||||
Any inappropriate acid‐suppressive exposure | 4.0% | 0.6% | <0.001 | 7.7% | 2.2% | <0.001 |
Any acid‐suppressive exposure | 33.1% | 31.8% | 0.16 | 54.5% | 52.9% | 0.05 |
Discharged on acid‐suppressive medication | 18.9% | 19.6% | 0.40 | 34.7% | 34.7% | 0.95 |

DISCUSSION
In this single‐center study, we found that a computerized CDS intervention resulted in a significant reduction in use of ASM for the sole purpose of stress ulcer prophylaxis in patients outside the ICU, a nonsignificant reduction in overall use, and no change in use on discharge. We found low rates of use for the isolated purpose of stress ulcer prophylaxis even before the intervention, and continuing preadmission medication was the most commonly selected indication throughout the study.
Although overall rates of ASM use declined after the intervention, the change was not statistically significant, and was not of the same magnitude as the decline in rates of use for the purpose of stress ulcer prophylaxis. This suggests that our intervention, in part, led to substitution of 1 indication for another. The indication that increased the most after rollout on both campuses was continuing preadmission medication. There are at least 2 possibilities for this finding: (1) the intervention prompted physicians to more accurately record the indication, or (2) physicians falsified the indication in order to execute the order. To explore these possibilities, we reviewed the charts of a random sample of 100 admissions during each of the baseline and intervention phases where continuing preadmission medication was selected as an indication for an ASM order. We found that 6/100 orders in the baseline phase and 7/100 orders in the intervention phase incorrectly indicated that the patient was on ASM prior to admission (P=0.77). This suggests that scenario 1 above is the more likely explanation for the increased use of this indication, and that the intervention, in part, simply unmasked the true rate of use at our medical center for the isolated purpose of stress ulcer prophylaxis.
These findings have implications for others attempting to use computerized CDS to better understand physician prescribing. They suggest that information collected through computer‐based interaction with clinicians at the point of care may not always be accurate or complete. As institutions increasingly use similar interventions to drive behavior, information obtained from such interaction should be validated, and when possible, patient outcomes should be measured.
Our findings suggest that rates of ASM use for the purpose of stress ulcer prophylaxis in the hospital may have declined over the last decade. Studies demonstrating that up to 70% of inpatient use of ASM was inappropriate were conducted 5 to 10 years ago.[1, 2, 3, 4, 5] Since then, studies have demonstrated risk of nosocomial infections in patients on ASM.[9, 10, 11] It is possible that the low rate of use for stress ulcer prophylaxis in our study is attributable to awareness of the risks of these medications, and limited our ability to detect differences in overall use. It is also possible, however, that a portion of the admissions with continuation of preadmission medication as the indication were started on these medications during a prior hospitalization. Thus, some portion of preadmission use is likely to represent failed medication reconciliation during a prior discharge. In this context, hospitalization may serve as an opportunity to evaluate the indication for ASM use even when these medications show up as preadmission medications.
There are additional limitations. First, the single‐center nature limits generalizability. Second, the first phase of our study, designed to obtain baseline data on ASM use, may have led to changes in prescribing prior to implementation of our CDS tool. Additionally, we did not validate the accuracy of each of the chosen indications, or the site of initial prescription in the case of preadmission exposure. Last, our study was not powered to investigate changes in rates of nosocomial gastrointestinal bleeding or nosocomial pneumonia owing to the infrequent nature of these complications.
In conclusion, we designed a simple computerized CDS intervention that was associated with a reduction in ASM use for stress ulcer prophylaxis in patients outside the ICU, a nonsignificant reduction in overall use, and no change in use on discharge. The majority of inpatient use represented continuation of preadmission medication, suggesting that interventions to improve the appropriateness of ASM prescribing should span the continuum of care. Future studies should investigate whether it is worthwhile and appropriate to reevaluate continued use of preadmission ASM during an inpatient stay.
Acknowledgements
The authors acknowledge Joshua Guthermann, MBA, and Jane Hui Chen Lim, MBA, for their assistance in the early phases of data analysis, and Long H. Ngo, PhD, for his statistical consultation.
Disclosures: Dr. Herzig was funded by a Young Clinician Research Award from the Center for Integration of Medicine and Innovative Technology, a nonprofit consortium of Boston teaching hospitals and universities, and grant number K23AG042459 from the National Institute on Aging. Dr. Marcantonio was funded by grant number K24AG035075 from the National Institute on Aging. The funding organizations had no involvement in any aspect of the study, including design, conduct, and reporting of the study. Dr. Herzig had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Drs. Herzig and Marcantonio were responsible for the study concept and design. Drs. Herzig, Feinbloom, Howell, and Ms. Adra and Mr. Afonso were responsible for the acquisition of data. Drs. Herzig, Howell, Marcantonio, and Mr. Guess were responsible for the analysis and interpretation of the data. Dr. Herzig drafted the manuscript. All of the authors participated in the critical revision of the manuscript for important intellectual content. Drs. Herzig and Marcantonio were responsible for study supervision. The authors report no conflicts of interest.
Prior studies have found that up to 70% of acid‐suppressive medication (ASM) use in the hospital is not indicated, most commonly for stress ulcer prophylaxis in patients outside of the intensive care unit (ICU).[1, 2, 3, 4, 5, 6, 7] Accordingly, reducing inappropriate use of ASM for stress ulcer prophylaxis in hospitalized patients is 1 of the 5 opportunities for improved healthcare value identified by the Society of Hospital Medicine as part of the American Board of Internal Medicine's Choosing Wisely campaign.[8]
We designed and tested a computerized clinical decision support (CDS) intervention with the goal of reducing use of ASM for stress ulcer prophylaxis in hospitalized patients outside the ICU at an academic medical center.
METHODS
Study Design
We conducted a quasiexperimental study using an interrupted time series to analyze data collected prospectively during clinical care before and after implementation of our intervention. The study was deemed a quality improvement initiative by the Beth Israel Deaconess Medical Center Committee on Clinical Investigations/Institutional Review Board.
Patients and Setting
All admissions >18 years of age to a 649‐bed academic medical center in Boston, Massachusetts from September 12, 2011 through July 3, 2012 were included. The medical center consists of an East and West Campus, located across the street from each other. Care for both critically ill and noncritically ill medical and surgical patients occurs on both campuses. Differences include greater proportions of patients with gastrointestinal and oncologic conditions on the East Campus, and renal and cardiac conditions on the West Campus. Additionally, labor and delivery occurs exclusively on the East Campus, and the density of ICU beds is greater on the West Campus. Both campuses utilize a computer‐based provider order entry (POE) system.
Intervention
Our study was implemented in 2 phases (Figure 1).

Baseline Phase
The purpose of the first phase was to obtain baseline data on ASM use prior to implementing our CDS tool designed to influence prescribing. During this baseline phase, a computerized prompt was activated through our POE system whenever a clinician initiated an order for ASM (histamine 2 receptor antagonists or proton pump inhibitors), asking the clinician to select the reason/reasons for the order based on the following predefined response options: (1) active/recent upper gastrointestinal bleed, (2) continuing preadmission medication, (3) Helicobacter pylori treatment, (4) prophylaxis in patient on medications that increase bleeding risk, (5) stress ulcer prophylaxis, (6) suspected/known peptic ulcer disease, gastritis, esophagitis, gastroesophageal reflux disease, and (7) other, with a free‐text box to input the indication. This indications prompt was rolled out to the entire medical center on September 12, 2011 and remained active for the duration of the study period.
Intervention Phase
In the second phase of the study, if a clinician selected stress ulcer prophylaxis as the only indication for ordering ASM, a CDS prompt alerted the clinician that Stress ulcer prophylaxis is not recommended for patients outside of the intensive care unit (ASHP Therapeutic Guidelines on Stress Ulcer Prophylaxis. Am J Health‐Syst Pharm. 1999, 56:347‐79). The clinician could then select either, For use in ICUOrder Medication, Choose Other Indication, or Cancel Order. This CDS prompt was rolled out in a staggered manner to the East Campus on January 3, 2012, followed by the West Campus on April 3, 2012.
Outcomes
The primary outcome was the rate of ASM use with stress ulcer prophylaxis selected as the only indication in a patient located outside of the ICU. We confirmed patient location in the 24 hours after the order was placed. Secondary outcomes were rates of overall ASM use, defined via pharmacy charges, and rates of use on discharge.
Statistical Analysis
To assure stable measurement of trends, we studied at least 3 months before and after the intervention on each campus. We used the Fisher exact test to compare the rates of our primary and secondary outcomes before and after the intervention, stratified by campus. For our primary outcomeat least 1 ASM order with stress ulcer prophylaxis selected as the only indication during hospitalizationwe developed a logistic regression model with a generalized estimating equation and exchangeable working correlation structure to control for admission characteristics (Table 1) and repeated admissions. Using a term for the interaction between time and the intervention, this model allowed us to assess changes in level and trend for the odds of a patient receiving at least 1 ASM order with stress ulcer prophylaxis as the only indication before, compared to after the intervention, stratified by campus. We used a 2‐sided type I error of <0.05 to indicate statistical significance.
Study Phase | Campus | |||
---|---|---|---|---|
East | West | |||
Baseline, n=3,747 | Intervention, n=6,191 | Baseline, n=11,177 | Intervention, n=5,285 | |
| ||||
Age, y, mean (SD) | 48.1 (18.5) | 47.7 (18.2) | 61.0 (18.0) | 60.3 (18.1) |
Gender, no. (%) | ||||
Female | 2744 (73.2%) | 4542 (73.4%) | 5551 (49.7%) | 2653 (50.2%) |
Male | 1003 (26.8%) | 1649 (26.6%) | 5626 (50.3%) | 2632 (49.8%) |
Race, no. (%) | ||||
Asian | 281 (7.5%) | 516 (8.3%) | 302 (2.7%) | 156 (3%) |
Black | 424 (11.3%) | 667 (10.8%) | 1426 (12.8%) | 685 (13%) |
Hispanic | 224 (6%) | 380 (6.1%) | 619 (5.5%) | 282 (5.3%) |
Other | 378 (10.1%) | 738 (11.9%) | 776 (6.9%) | 396 (7.5%) |
White | 2440 (65.1%) | 3890 (62.8%) | 8054 (72%) | 3766 (71.3%) |
Charlson score, mean (SD) | 0.8 (1.1) | 0.7 (1.1) | 1.5 (1.4) | 1.4 (1.4) |
Gastrointestinal bleeding, no. (%)* | 49 (1.3%) | 99 (1.6%) | 385 (3.4%) | 149 (2.8%) |
Other medication exposures, no. (%) | ||||
Therapeutic anticoagulant | 218 (5.8%) | 409 (6.6%) | 2242 (20.1%) | 1022 (19.3%) |
Prophylactic anticoagulant | 1081 (28.8%) | 1682 (27.2%) | 5999 (53.7%) | 2892 (54.7%) |
NSAID | 1899 (50.7%) | 3141 (50.7%) | 1248 (11.2%) | 575 (10.9%) |
Antiplatelet | 313 (8.4%) | 585 (9.4%) | 4543 (40.6%) | 2071 (39.2%) |
Admitting department, no. (%) | ||||
Surgery | 2507 (66.9%) | 4146 (67%) | 3255 (29.1%) | 1578 (29.9%) |
Nonsurgery | 1240 (33.1%) | 2045 (33%) | 7922 (70.9%) | 3707 (70.1%) |
Any ICU Stay, no. (%) | 217 (5.8%) | 383 (6.2%) | 2786 (24.9%) | 1252 (23.7%) |
RESULTS
There were 26,400 adult admissions during the study period, and 22,330 discrete orders for ASM. Overall, 12,056 (46%) admissions had at least 1 charge for ASM. Admission characteristics were similar before and after the intervention on each campus (Table 1).
Table 2 shows the indications chosen each time ASM was ordered, stratified by campus and study phase. Although selection of stress ulcer prophylaxis decreased on both campuses during the intervention phase, selection of continuing preadmission medication increased.
Study Phase | Campus | |||
---|---|---|---|---|
East | West | |||
Baseline, n=2,062 | Intervention, n=3,243 | Baseline, n=12,038 | Intervention, n=4,987 | |
| ||||
Indication* | ||||
Continuing preadmission medication | 910 (44.1%) | 1695 (52.3%) | 5597 (46.5%) | 2802 (56.2%) |
PUD, gastritis, esophagitis, GERD | 440 (21.3%) | 797 (24.6%) | 1303 (10.8%) | 582 (11.7%) |
Stress ulcer prophylaxis | 298 (14.4%) | 100 (3.1%) | 2659 (22.1%) | 681 (13.7%) |
Prophylaxis in patient on medications that increase bleeding risk | 226 (11.0%) | 259 (8.0%) | 965 (8.0%) | 411 (8.2%) |
Active/recent gastrointestinal bleed | 154 (7.5%) | 321 (9.9%) | 1450 (12.0%) | 515 (10.3) |
Helicobacter pylori treatment | 6 (0.2%) | 2 (0.1%) | 43 (0.4%) | 21 (0.4%) |
Other | 111 (5.4%) | 156 (4.8%) | 384 (3.2%) | 186 (3.7%) |
Table 3 shows the unadjusted comparison of outcomes between baseline and intervention phases on each campus. Use of ASM with stress ulcer prophylaxis as the only indication decreased during the intervention phase on both campuses. There was a nonsignificant reduction in overall rates of use on both campuses, and use on discharge was unchanged. Figure 2 demonstrates the unadjusted and modeled monthly rates of admissions with at least 1 ASM order with stress ulcer prophylaxis selected as the only indication, stratified by campus. After adjusting for the admission characteristics in Table 1, during the intervention phase on both campuses there was a significant immediate reduction in the odds of receiving an ASM with stress ulcer prophylaxis selected as the only indication (East Campus odds ratio [OR]: 0.36, 95% confidence interval [CI]: 0.180.71; West Campus OR: 0.41, 95% CI: 0.280.60), and a significant change in trend compared to the baseline phase (East Campus 1.5% daily decrease in odds of receiving ASM solely for stress ulcer prophylaxis, P=0.002; West Campus 0.9% daily decrease in odds of receiving ASM solely for stress ulcer prophylaxis, P=0.02).
Study Phase | Campus | |||||
---|---|---|---|---|---|---|
East | West | |||||
Baseline, n=3,747 | Intervention, n=6,191 | P Value* | Baseline, n=11,177 | Intervention, n=5,285 | P Value* | |
| ||||||
Outcome | ||||||
Any inappropriate acid‐suppressive exposure | 4.0% | 0.6% | <0.001 | 7.7% | 2.2% | <0.001 |
Any acid‐suppressive exposure | 33.1% | 31.8% | 0.16 | 54.5% | 52.9% | 0.05 |
Discharged on acid‐suppressive medication | 18.9% | 19.6% | 0.40 | 34.7% | 34.7% | 0.95 |

DISCUSSION
In this single‐center study, we found that a computerized CDS intervention resulted in a significant reduction in use of ASM for the sole purpose of stress ulcer prophylaxis in patients outside the ICU, a nonsignificant reduction in overall use, and no change in use on discharge. We found low rates of use for the isolated purpose of stress ulcer prophylaxis even before the intervention, and continuing preadmission medication was the most commonly selected indication throughout the study.
Although overall rates of ASM use declined after the intervention, the change was not statistically significant, and was not of the same magnitude as the decline in rates of use for the purpose of stress ulcer prophylaxis. This suggests that our intervention, in part, led to substitution of 1 indication for another. The indication that increased the most after rollout on both campuses was continuing preadmission medication. There are at least 2 possibilities for this finding: (1) the intervention prompted physicians to more accurately record the indication, or (2) physicians falsified the indication in order to execute the order. To explore these possibilities, we reviewed the charts of a random sample of 100 admissions during each of the baseline and intervention phases where continuing preadmission medication was selected as an indication for an ASM order. We found that 6/100 orders in the baseline phase and 7/100 orders in the intervention phase incorrectly indicated that the patient was on ASM prior to admission (P=0.77). This suggests that scenario 1 above is the more likely explanation for the increased use of this indication, and that the intervention, in part, simply unmasked the true rate of use at our medical center for the isolated purpose of stress ulcer prophylaxis.
These findings have implications for others attempting to use computerized CDS to better understand physician prescribing. They suggest that information collected through computer‐based interaction with clinicians at the point of care may not always be accurate or complete. As institutions increasingly use similar interventions to drive behavior, information obtained from such interaction should be validated, and when possible, patient outcomes should be measured.
Our findings suggest that rates of ASM use for the purpose of stress ulcer prophylaxis in the hospital may have declined over the last decade. Studies demonstrating that up to 70% of inpatient use of ASM was inappropriate were conducted 5 to 10 years ago.[1, 2, 3, 4, 5] Since then, studies have demonstrated risk of nosocomial infections in patients on ASM.[9, 10, 11] It is possible that the low rate of use for stress ulcer prophylaxis in our study is attributable to awareness of the risks of these medications, and limited our ability to detect differences in overall use. It is also possible, however, that a portion of the admissions with continuation of preadmission medication as the indication were started on these medications during a prior hospitalization. Thus, some portion of preadmission use is likely to represent failed medication reconciliation during a prior discharge. In this context, hospitalization may serve as an opportunity to evaluate the indication for ASM use even when these medications show up as preadmission medications.
Our study has additional limitations. First, its single‐center design limits generalizability. Second, the first phase of the study, designed to obtain baseline data on ASM use, may have led to changes in prescribing prior to implementation of our CDS tool. Additionally, we did not validate the accuracy of each of the chosen indications, or the site of initial prescription in the case of preadmission exposure. Last, our study was not powered to detect changes in rates of nosocomial gastrointestinal bleeding or nosocomial pneumonia, owing to the infrequency of these complications.
In conclusion, we designed a simple computerized CDS intervention that was associated with a reduction in ASM use for stress ulcer prophylaxis in patients outside the ICU, a nonsignificant reduction in overall use, and no change in use on discharge. The majority of inpatient use represented continuation of preadmission medication, suggesting that interventions to improve the appropriateness of ASM prescribing should span the continuum of care. Future studies should investigate whether it is worthwhile and appropriate to reevaluate continued use of preadmission ASM during an inpatient stay.
Acknowledgements
The authors acknowledge Joshua Guthermann, MBA, and Jane Hui Chen Lim, MBA, for their assistance in the early phases of data analysis, and Long H. Ngo, PhD, for his statistical consultation.
Disclosures: Dr. Herzig was funded by a Young Clinician Research Award from the Center for Integration of Medicine and Innovative Technology, a nonprofit consortium of Boston teaching hospitals and universities, and grant number K23AG042459 from the National Institute on Aging. Dr. Marcantonio was funded by grant number K24AG035075 from the National Institute on Aging. The funding organizations had no involvement in any aspect of the study, including design, conduct, and reporting of the study. Dr. Herzig had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Drs. Herzig and Marcantonio were responsible for the study concept and design. Drs. Herzig, Feinbloom, Howell, and Ms. Adra and Mr. Afonso were responsible for the acquisition of data. Drs. Herzig, Howell, Marcantonio, and Mr. Guess were responsible for the analysis and interpretation of the data. Dr. Herzig drafted the manuscript. All of the authors participated in the critical revision of the manuscript for important intellectual content. Drs. Herzig and Marcantonio were responsible for study supervision. The authors report no conflicts of interest.
References
- Stress ulcer prophylaxis in hospitalized patients not in intensive care units. Am J Health Syst Pharm. 2007;64(13):1396–1400.
- Magnitude and economic impact of inappropriate use of stress ulcer prophylaxis in non‐ICU hospitalized patients. Am J Gastroenterol. 2006;101(10):2200–2205.
- Stress‐ulcer prophylaxis for general medical patients: a review of the evidence. J Hosp Med. 2007;2(2):86–92.
- Hospital use of acid‐suppressive medications and its fall‐out on prescribing in general practice: a 1‐month survey. Aliment Pharmacol Ther. 2003;17(12):1503–1506.
- Inadequate use of acid‐suppressive therapy in hospitalized patients and its implications for general practice. Dig Dis Sci. 2005;50(12):2307–2311.
- Brief report: reducing inappropriate usage of stress ulcer prophylaxis among internal medicine residents. A practice‐based educational intervention. J Gen Intern Med. 2006;21(5):498–500.
- Inappropriate continuation of stress ulcer prophylactic therapy after discharge. Ann Pharmacother. 2007;41(10):1611–1616.
- Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486–492.
- Risk of Clostridium difficile diarrhea among hospital inpatients prescribed proton pump inhibitors: cohort and case‐control studies. CMAJ. 2004;171(1):33–38.
- Iatrogenic gastric acid suppression and the risk of nosocomial Clostridium difficile infection. Arch Intern Med. 2010;170(9):784–790.
- Acid‐suppressive medication use and the risk for hospital‐acquired pneumonia. JAMA. 2009;301(20):2120–2128.
- Healthcare Cost and Utilization Project. Clinical classifications software (CCS) for ICD‐9‐CM. December 2009. Agency for Healthcare Research and Quality, Rockville, MD. Available at: www.hcup‐us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed June 18, 2014.
RRTs in Teaching Hospitals
In this issue of the Journal of Hospital Medicine, Butcher and colleagues report on residents' perceptions of a rapid response team's (RRT) impact on their training.[1] RRTs mobilize key clinicians in an attempt to rescue acutely decompensating hospitalized patients. Early recognition is essential, and most systems allow any concerned health professional to activate the RRT. Although the evidence for benefit is somewhat controversial,[2, 3] an overwhelming majority of hospitals have implemented RRTs.[4, 5]
The use of RRTs in teaching hospitals raises important concerns. The ability of nurses and other professionals to activate the RRT without the need for prior approval from a physician could undermine resident physician autonomy. Residents may feel that their clinical judgment has been usurped or second‐guessed. Whether nurse led or physician led, RRTs always introduce new members to the care team.[6] These new team members share in decision making, which may theoretically reduce residents' opportunities to hone their decision‐making skills when caring for potentially critically ill patients.
Despite these potential disadvantages, Butcher and colleagues report that the vast majority of residents found working with the RRT to be a valuable educational experience and disagreed that the RRT decreased their clinical autonomy. Interestingly, surgical residents were less likely to agree that working with the RRT was a valuable educational experience and much more likely to feel that nurses should contact them before activating the RRT.
The results of the study by Butcher et al. highlight several evolving paradigms in medical education and quality improvement. Over the past 10 to 15 years, and fostered in large part by Accreditation Council for Graduate Medical Education (ACGME) duty‐hour revisions,[7] teaching hospitals have moved away from the traditional practice of using residents primarily to fill their clinical service needs to an approach that treats residents more as learners. Resident training requires clinical care, but the provision of clinical care in teaching hospitals does not necessarily require residents. At the same time, healthcare organizations have moved away from the traditional culture characterized by reliance on individual skill, physician autonomy, and steep hierarchies, to an enlightened culture emphasizing teamwork with flattened hierarchies and systems redesigned to provide safe and effective care.[8]
For the most part, the paradigm shifts in medical education and quality improvement have been aligned. In fact, the primary goal of duty‐hour policy revisions was to improve patient safety.[9] Yet, Butcher and colleagues' study highlights the need to continuously and deliberately integrate our efforts to enhance medical education and quality of care, and more rigorously study the effects. Rather than be pleasantly surprised that residents understand the intrinsic value of an RRT to patient care and their education, we should ensure that residents understand the rationale for an RRT and consider using the RRT to complement other efforts to educate resident physicians in managing unstable patients. RRTs introduce a wonderful opportunity to develop novel interprofessional curricula. Learning objectives should include the management of common clinical syndromes represented in RRT calls, but should also focus on communication, leadership, and other essential teamwork skills. Simulation‐based training is an ideal teaching strategy for these objectives, and prior studies support the effectiveness of this approach.[10, 11]
The ACGME has now implemented the Next Accreditation System (NAS) across all specialties. Of the 22 reporting milestones within internal medicine, 12 relate directly to quality improvement and patient safety objectives, whereas 6 relate directly to pathophysiology and disease management.[12] Educating residents on systems of care is further highlighted by the Clinical Learning Environment Review (CLER), a key component of the NAS. The CLER program uses site visits to identify teaching hospitals' efforts to engage residents in 6 focus areas: patient safety; healthcare quality; transitions of care; supervision; duty hours, fatigue management and mitigation; and professionalism.[13] CLER site visits include discussions and observations with hospital executive leadership, residents, graduate medical education leadership, nursing, and other hospital staff. The CLER program raises the bar for integrating medical education and quality improvement efforts even further. Quality improvement activities that previously supported an informal curriculum must now be made explicit to, and deliberately engage, our residents. Teaching hospitals are being tasked with including residents in safety initiatives and on all quality committees, especially those that cross departmental boundaries, such as the Emergency Response Team/RRT Committee. Residents should meaningfully participate in, and whenever possible lead, quality improvement projects, the focus of which may ideally be identified by residents themselves. An important resource for medical educators is the Quality and Safety Educators Academy, a program developed by the Society of Hospital Medicine and the Alliance for Academic Internal Medicine, which provides educators with the knowledge and tools to integrate quality improvement and patient safety objectives into their training programs.[14]
In conclusion, we are reassured that residents understand the intrinsic value of an RRT to patient care and their education. We encourage medical educators to use RRTs as an opportunity to develop interprofessional curricula, including those that aim to enhance teamwork skills. Beyond curricular innovation, quality‐improvement activities in teaching hospitals must deliberately engage our residents at every level of the organization.
Disclosure
Disclosure: Nothing to report.
References
- The effect of a rapid response team on resident perceptions of education and autonomy. J Hosp Med. 2015;10(1):8–12.
- Rapid response teams: a systematic review and meta‐analysis. Arch Intern Med. 2010;170(1):18–26.
- Rapid‐response systems as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):417–425.
- Hospital cardiac arrest resuscitation practice in the United States: a nationally representative survey. J Hosp Med. 2014;9(6):353–357.
- Achieving a safe culture: theory and practice. Work Stress. 1998;12(3):293–306.
- Rapid response systems in adult academic medical centers. Jt Comm J Qual Patient Saf. 2009;35(9):475–482, 437.
- The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363(2):e3.
- The AHRQ hospital survey on patient safety culture: a tool to plan and evaluate patient safety programs. In: Henriksen K, Battles JB, Keyes MA, et al., eds. Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 2: Culture and Redesign). Rockville, MD: Agency for Healthcare Research and Quality; 2008. Available at: http://www.ncbi.nlm.nih.gov/books/NBK43699. Accessed November 4, 2014.
- The ACGME 2011 Duty Hour Standards: Enhancing Quality of Care, Supervision, and Resident Professional Development. Chicago, IL: Accreditation Council for Graduate Medical Education; 2011.
- Improving medical emergency team (MET) performance using a novel curriculum and a computerized human patient simulator. Qual Saf Health Care. 2005;14(5):326–331.
- System‐based interprofessional simulation‐based training program increases awareness and use of rapid response teams. Jt Comm J Qual Patient Saf. 2014;40(6):279–287.
- Internal Medicine Milestone Group. The Internal Medicine Milestone Project. A Joint Initiative of the Accreditation Council for Graduate Medical Education and The American Board of Internal Medicine. Available at: https://www.acgme.org/acgmeweb/Portals/0/PDFs/Milestones/InternalMedicineMilestones.pdf. Accessed November 4, 2014.
- The clinical learning environment: the foundation of graduate medical education. JAMA. 2013;309(16):1687–1688.
- The Quality and Safety Educators Academy: fulfilling an unmet need for faculty development. Am J Med Qual. 2014;29(1):5–12.
Clinical Decision‐Support Tool
The adoption of electronic health records (EHRs) in US hospitals continues to rise steeply, with nearly 60% of all hospitals having at least a basic EHR as of 2014.[1] EHRs bring with them the ability to inform and guide clinicians as they make decisions. In theory, this form of clinical decision support (CDS) ensures quality of care, reduces adverse events, and improves efficiency; in practice, experience in the field paints a mixed picture.[2, 3] This issue of the Journal of Hospital Medicine presents 3 examples of CDS that illustrate the distance between what we see as CDS' full potential and current limitations.
In the study by Herzig et al.,[4] investigators took on the challenge of implementing stress ulcer prophylaxis guidelines developed by the Society of Hospital Medicine. The investigators first demonstrated that targeted electronic prompts captured patients' indications for acid suppressive therapy, and could be used to prohibit prescribers from ordering acid suppressive therapy among patients outside the intensive care unit (ICU) setting. Through an elegant interrupted time series study design deployed across 2 hospital campuses, the investigators were able to demonstrate an immediate and clinically significant reduction in acid suppressive therapy outside the ICU. They further found that the impact of this reduction was augmented over time, suggesting that the electronic prompts had a sustained impact on provider ordering behavior. However, below the headline, and relevant to the limitations of CDS, the investigators noted that much of the reduction in the use of acid suppressive therapy for stress ulcer prophylaxis could be accounted for by providers' choice of another acceptable indication (eg, continuing preadmission medication). The authors speculated that the CDS intervention prompted providers to more accurately record the indication for acid suppressive therapy. It is also possible that providers simply chose an alternate indication to circumvent the decision‐support step. Perhaps as a result of these 2 offsetting factors, the actual use of acid suppressive therapy, regardless of indication, decreased only in a modest and statistically nonsignificant way, casting the true effectiveness of this CDS intervention into question.
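For readers less familiar with interrupted time series analysis, the segmented regression sketch below illustrates the general approach (an immediate level change plus a change in slope at the rollout date). The monthly series is simulated purely for illustration and is not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly rates of ASM orders for stress ulcer prophylaxis,
# simulated for illustration only (not the study's data).
rng = np.random.default_rng(0)
months = np.arange(24)                      # 12 baseline + 12 intervention months
post = (months >= 12).astype(int)           # 1 after the CDS rollout
rate = 4.0 - 0.02 * months - 2.5 * post - 0.05 * post * (months - 12)
rate = rate + rng.normal(0, 0.2, size=24)

df = pd.DataFrame({
    "rate": rate,                           # % of admissions with inappropriate ASM use
    "time": months,                         # underlying secular trend
    "post": post,                           # immediate level change at rollout
    "time_after": post * (months - 12),     # slope change after rollout
})

model = smf.ols("rate ~ time + post + time_after", data=df).fit()
print(model.params)  # 'post' estimates the immediate drop; 'time_after' the change in trend
```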
Two other studies in this issue of the Journal of Hospital Medicine[5, 6] provide valuable insights into interactions between social and technical factors[7, 8, 9, 10] that determine the success or failure in the use of technology such as CDS to drive organizational performance. At the technical end of this sociotechnical spectrum, the study by Knight et al.[5] illustrated that a minimally configured and visually unintuitive medication decision‐support system resulted in a high number of alerts (approximately 17% of studied orders), leading to the well‐reported phenomenon of alert fatigue and a substantially lower response rate than those reported in the literature.[11, 12, 13] Moreover, the analysis suggested that responses to these alerts were especially muted in high‐risk situations, including when the patient was older, the patient had a greater length of stay, care was delivered on the internal medicine service, a resident physician was the prescriber, and the medication was on the Institute for Safe Medication Practices list of high‐alert medications. The investigators concluded that a redesign of the medication decision‐support system was needed.
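The type of stratified acceptance analysis described here is straightforward to sketch; the alert‐log schema below is invented for illustration and does not reflect the dataset analyzed by Knight et al.

```python
import pandas as pd

# Hypothetical alert log; column names are invented for illustration only.
alerts = pd.DataFrame({
    "order_id":   [1, 2, 3, 4, 5, 6],
    "prescriber": ["resident", "attending", "resident", "NP", "resident", "attending"],
    "high_alert": [True, False, True, False, True, False],   # ISMP high-alert medication
    "accepted":   [False, True, False, True, False, True],   # warning changed the order
})

# Overall warning acceptance, then acceptance stratified by two of the
# factors highlighted in the study (prescriber role, high-alert status).
print(alerts["accepted"].mean())
print(alerts.groupby("prescriber")["accepted"].mean())
print(alerts.groupby("high_alert")["accepted"].mean())
```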
The study by Chen et al.[6] illuminated how social factors pose challenges in implementing CDS. Investigators in this study were previously successful in using a combination of an education campaign and interruptive decision‐support prompts to reduce the inappropriate ordering of blood transfusions. However, even with a successful intervention, up to 30% of transfusions occurred outside of recommended guidelines. This finding prompted the investigators to analyze the free‐text reasons offered by providers for overriding the recommended guidelines. Two key patterns emerged from their structured analysis. First, many of the apparently inappropriate transfusions occurred under officially sanctioned protocols (such as stem cell transplant) that the computer system was not able to take into account in generating alerts. Second, many orders that reflected questionable practices were being entered by resident physicians, physician assistants, nurse practitioners, and nurses who were least empowered to challenge requests from senior staff.
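A structured review of free‐text override reasons can be approximated with simple keyword categorization, as in the hypothetical sketch below; the categories, keywords, and example strings are illustrative and are not the coding scheme used by Chen et al.

```python
import re
from collections import Counter

# Hypothetical free-text override reasons, invented for illustration.
reasons = [
    "stem cell transplant protocol",
    "attending requested transfusion",
    "symptomatic anemia",
    "per BMT protocol",
    "surgeon asked for 2 units",
]

# Illustrative categories and keyword patterns (not the study's coding scheme).
CATEGORIES = {
    "sanctioned protocol": r"protocol|transplant|bmt",
    "deference to senior staff": r"attending|surgeon|requested|asked",
}

def categorize(reason: str) -> str:
    for label, pattern in CATEGORIES.items():
        if re.search(pattern, reason, flags=re.IGNORECASE):
            return label
    return "other"

print(Counter(categorize(r) for r in reasons))
```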
Several practical and actionable lessons can be drawn from the 3 sets of investigators featured in this issue of the Journal of Hospital Medicine. First, all investigators defined metrics that should be tracked over time to demonstrate progress and to make iterative improvements; this discipline is needed in both academic and community settings to prioritize limited CDS resources in an objective and data‐driven way. Second, as the Herzig et al.[4] article illustrated, when it comes to evaluating the impact of CDS, we cannot be satisfied merely with process measures (eg, change in clinical documentation) at the expense of outcome measures (eg, decrease in inappropriate use of therapies). Third, as Chen et al.[6] recognized, CDS is but one component of an educational program to guide and alter clinical behavior, and must be deployed in conjunction with other educational tools such as newsletters, traditional lectures, or academic detailing. Fourth, clinicians with a stake in improving quality and safety should be on guard against the well‐documented phenomenon of alert fatigue by ensuring their organization selects an appropriate framework for deciding which CDS alerts are activated and, where possible, displays the highest‐priority alerts in the most prominent and interruptive manner. Fifth, CDS must be maintained over time as clinical guidelines and clinicians' receptivity to each CDS intervention evolve. Alerts that are not changing clinical behavior should either be modified or simply turned off. Sixth, free text entered as part of structured data entry (eg, while placing orders) or as reasons for overriding CDS (as in Chen et al.[6]) offers significant insights on how to optimize CDS, and should be monitored systematically on an ongoing basis to ensure the EMR addresses users' changing needs and mental models.
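As a purely hypothetical illustration of the fifth lesson, an organization might periodically compute per‐alert acceptance rates and flag low performers as candidates for redesign or retirement; the schema and threshold below are invented for illustration.

```python
import pandas as pd

# Hypothetical per-alert firing log; schema invented for illustration.
log = pd.DataFrame({
    "alert_id": ["ddi-001", "ddi-001", "renal-dose", "renal-dose", "dup-therapy"],
    "accepted": [False, False, True, False, False],
})

summary = (log.groupby("alert_id")["accepted"]
              .agg(firings="count", acceptance_rate="mean")
              .reset_index())

# Flag alerts whose acceptance rate falls below a locally chosen threshold
# as candidates for redesign or deactivation.
THRESHOLD = 0.10
summary["review_candidate"] = summary["acceptance_rate"] < THRESHOLD
print(summary)
```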
So what is the clinician with an interest in improving healthcare outcomes and organizational efficiency to do given CDS' limitations? One option is to wait for the science of CDS to further mature and have those advances embedded in the EMR at your organization. Another option might be to rely on the information technology and clinical informatics professionals at your organization to decide how CDS should be used locally. In 2014, these may be untenable choices for the following reasons. First, given the universal pressures to improve healthcare outcomes and contain costs,[14] healthcare organizations must use all available tools to achieve challenging performance goals. Second, as EMRs with CDS become commonplace, and as the 3 articles in this issue of the Journal of Hospital Medicine and others have illustrated, there are many opportunities to misuse or poorly implement CDS, with potentially dire consequences.[15] Third, design and deployment of effective CDS require information technology and informatics professionals to collaborate with clinicians to gauge the quality of EMR data used to drive CDS and clinicians' receptivity to CDS, illuminate the sociotechnical context in which to deploy the CDS, and champion the CDS intervention among their colleagues. Clinicians' input is therefore an essential ingredient to success. Fourth, organizational trust, a key aspect of a healthy safety culture, is hard to build and easy to erode.[9, 16] If clinicians at an organization lose trust in CDS because of poor design and deployment strategies, they are likely to ignore CDS in the future.[17]
Like tools introduced into medicine such as magnetic resonance imaging and highly active antiretroviral therapy, CDS will need to evolve as the clinical community grapples with its potential and limitations. As EMRs move toward ubiquity in the hospital setting, CDS will become part of the fabric of hospital‐based practice, and the Journal of Hospital Medicine readership would do well to learn about this new tool of the trade.
Disclosure
Disclosure: Nothing to report.
References
- More than half of US hospitals have at least a basic EHR, but stage 2 criteria remain challenging for most. Health Aff (Millwood). 2014;33(9):1664–1671.
- Clinical Decision Support Systems: State of the Art. AHRQ publication no. 09‐0069‐EF. Rockville, MD: Agency for Healthcare Research and Quality; 2009.
- Clinical practice improvement and redesign: how change in workflow can be supported by clinical decision support. AHRQ publication no. 09‐0054‐EF. Rockville, MD: Agency for Healthcare Research and Quality; June 2009.
- Improving appropriateness of acid‐suppressive medication use via computerized clinical decision support. J Hosp Med. 2015;10(1):41–45.
- Factors associated with medication warning acceptance for hospitalized adults. J Hosp Med. 2015;10(1):19–25.
- Why providers transfuse blood products outside recommended guidelines in spite of integrated electronic best practice alerts. J Hosp Med. 2015;10(1):1–7.
- Categorizing the unintended sociotechnical consequences of computerized provider order entry. Int J Med Inform. 2007;76(1):S21–S27.
- Unintended consequences of information technologies in health care: an interactive sociotechnical analysis. J Am Med Inform Assoc. 2007;15:542–549.
- A new socio‐technical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care. 2010;19(suppl 3):i68–i74.
- Brigham Young University. Socio‐technical theory. Available at: http://istheory.byu.edu/wiki/Socio‐technical_theory. Last updated November 15, 2011.
- Electronic drug interaction alerts in ambulatory care: the value and acceptance of high‐value alerts in US medical practices as assessed by an expert clinical panel. Drug Saf. 2011;34(7):587–593.
- Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc. 2006;13(1):5–11.
- Are we heeding the warning signs? Examining providers' overrides of computerized drug‐drug interaction alerts in primary care. PLoS One. 2013;8(12):e85071.
- The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27(3):759–769.
- Committee on Patient Safety and Health Information Technology; Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: National Academies Press; 2012.
- Explicit and implicit trust within safety culture. Risk Anal. 2006;26(5):1139–1150.
- Clinical decision support systems could be modified to reduce 'alert fatigue' while still minimizing the risk of litigation. Health Aff (Millwood). 2011;30:2310–2317.
The adoption of electronic health records (EHRs) in US hospitals continues to rise steeply, with nearly 60% of all hospitals having at least a basic EHR as of 2014.[1] EHRs bring with them the ability to inform and guide clinicians as they make decisions. In theory, this form of clinical decision support (CDS) ensures quality of care, reduces adverse events, and improves efficiency; in practice, experience in the field paints a mixed picture.[2, 3] This issue of the Journal of Hospital Medicine presents 3 examples of CDS that illustrate the distance between what we see as CDS' full potential and current limitations.
In the study by Herzig et al.[4] investigators took on the challenge of implementing stress ulcer prophylaxis guidelines developed by the Society of Hospital Medicine. The investigators first demonstrated that targeted electronic prompts captured patients' indications for acid suppressive therapy, and could be used to prohibit prescribers from ordering acid suppressive therapy among patients outside the intensive care unit (ICU) setting. Through an elegant interrupted time series study design deployed across 2 hospital campuses, the investigators were able to demonstrate immediate and clinically significant reduction in acid suppressive therapy outside the ICU. They further found that the impact of this reduction was augmented over time, suggesting that the electronic prompts had a sustained impact on provider ordering behavior. However, below the headlineand relevant to the limitations of CDSthe investigators noted that much of the reduction in the use of acid suppressive therapy for stress ulcer prophylaxis could be accounted for by providers' choice of another acceptable indication (eg, continuing preadmission medication). The authors speculated that the CDS intervention prompted providers to more accurately record the indication for acid suppressive therapy. It is also possible that providers simply chose an alternate indication to circumvent the decision‐support step. Perhaps as a result of these 2 offsetting factors, the actual use of acid suppressive therapy, regardless of indication, only decreased in a modest and statistically nonsignificant way, casting the true effectiveness of this CDS intervention into question.
Two other studies in this issue of the Journal of Hospital Medicine[5, 6] provide valuable insights into interactions between social and technical factors[7, 8, 9, 10] that determine the success or failure in the use of technology such as CDS to drive organizational performance. At the technical end of this sociotechnical spectrum, the study by Knight et al.[5] illustrated that a minimally configured and visually unintuitive medication decision‐support system resulted in a high number of alerts (approximately 17% of studied orders), leading to the well‐reported phenomena of alert fatigue and substantially lower response rate compared to those reported in the literature.[11, 12, 13] Moreover, the analysis suggested that response to these alerts were particularly muted among situations that were particularly high risk, including the patient being older, patient having a greater length of stay, care being delivered in the internal medicine service, resident physician being the prescriber, and the medication being on the Institute for Safe Medication Practices list of high‐alert medications. The investigators concluded that a redesign of the medication decision‐support system was needed.
The study by Chen et al.[6] illuminated how social factors pose challenges in implementing CDS. Investigators in this study were previously successful in using a combination of an education campaign and interruptive decision‐support prompts to reduce the inappropriate ordering of blood transfusions. However, even with a successful intervention, up to 30% of transfusions occurred outside of recommended guidelines. This finding prompted the investigators to analyze the free‐text reasons offered by providers for overriding the recommended guidelines. Two key patterns emerged from their structured analysis. First, many of the apparently inappropriate transfusions occurred under officially sanctioned protocols (such as stem cell transplant) that the computer system was not able to take into account in generating alerts. Second, many orders that reflected questionable practices were being entered by resident physicians, physician assistants, nurse practitioners, and nurses who were least empowered to challenge requests from senior staff.
Several practical and actionable lessons can be drawn from the 3 sets of investigators featured in this issue of the Journal of Hospital Medicine. First, all investigators defined metrics that should be tracked over time to demonstrate progress and to make iterative improvements; this discipline is needed in both academic and community settings to prioritize limited CDS resources in an objective and data‐driven way. Second, as the Herzig et al.[4] article illustrated, when it comes to evaluating the impact of CDS, we cannot be satisfied merely with process measures (eg, change in clinical documentation) at the expense of outcome measures (eg, decrease in inappropriate use of therapies). Third, as Chen et al.[6] recognized, CDS is but a component of an educational program to guide and alter clinical behavior, and must be deployed in conjunction with other educational tools such as newsletters, traditional lectures, or academic detailing. Fourth, clinicians with a stake in improving quality and safety should be on guard against the well‐documented phenomena of alert fatigue by ensuring their organization selects an appropriate framework for deciding which CDS alerts are activated andwhere possibledisplay the highest‐priority alerts in the most prominent and interruptive manner. Fifth, CDS must be maintained over time as clinical guidelines and clinicians' receptivity to each CDS evolve. Alerts that are not changing clinical behavior should either be modified or simply turned off. Sixth, free text entered as part of structured data entry (eg, while placing orders) or as reasons for overriding CDS (as in Chen et al.[6]) offer significant insights on how to optimize CDS, and should be monitored systematically on an ongoing basis to ensure the EMR addresses users' changing needs and mental models.
So what is the clinician with an interest in improving healthcare outcomes and organizational efficiency to do given CDS' limitations? One option is to wait for the science of CDS to further mature and have those advances embedded in the EMR at your organization. Another option might be to rely on the information technology and clinical informatics professionals at your organization to decide how CDS should be used locally. In 2014, these may be untenable choices for the following reasons. First, given the universal pressures to improve healthcare outcomes and contain costs,[14] healthcare organizations must use all available tools to achieve challenging performance goals. Second, as EMRs with CDS become commonplace, and as the 3 articles in this issue of the Journal of Hospital Medicine and others have illustrated, there are many opportunities to misuse or poorly implement CDS, with potentially dire consequences.[15] Third, design and deployment of effective CDS require information technology and informatics professionals to collaborate with clinicians to gauge the quality of EMR data used to drive CDS and clinicians' receptivity to CDS, illuminate the sociotechnical context in which to deploy the CDS, and champion the CDS intervention among their colleagues. Clinicians' input is therefore an essential ingredient to success. Fourth, organizational trust, a key aspect of a healthy safety culture, is hard to build and easy to erode.[9, 16] If clinicians at an organization lose trust in CDS because of poor design and deployment strategies, they are likely to ignore CDS in the future.[17]
Like tools introduced into medicine such as magnetic resonance imaging and highly active antiretroviral therapy, CDS will need to evolve as the clinical community grapples with its potential and limitations. As EMRs move toward ubiquity in the hospital setting, CDS will become part of the fabric of hospital‐based practice, and the Journal of Hospital Medicine readership would do well to learn about this new tool of the trade.
Disclosure
Disclosure: Nothing to report.
The adoption of electronic health records (EHRs) in US hospitals continues to rise steeply, with nearly 60% of all hospitals having at least a basic EHR as of 2014.[1] EHRs bring with them the ability to inform and guide clinicians as they make decisions. In theory, this form of clinical decision support (CDS) ensures quality of care, reduces adverse events, and improves efficiency; in practice, experience in the field paints a mixed picture.[2, 3] This issue of the Journal of Hospital Medicine presents 3 examples of CDS that illustrate the distance between what we see as CDS' full potential and current limitations.
In the study by Herzig et al.[4] investigators took on the challenge of implementing stress ulcer prophylaxis guidelines developed by the Society of Hospital Medicine. The investigators first demonstrated that targeted electronic prompts captured patients' indications for acid suppressive therapy, and could be used to prohibit prescribers from ordering acid suppressive therapy among patients outside the intensive care unit (ICU) setting. Through an elegant interrupted time series study design deployed across 2 hospital campuses, the investigators were able to demonstrate immediate and clinically significant reduction in acid suppressive therapy outside the ICU. They further found that the impact of this reduction was augmented over time, suggesting that the electronic prompts had a sustained impact on provider ordering behavior. However, below the headlineand relevant to the limitations of CDSthe investigators noted that much of the reduction in the use of acid suppressive therapy for stress ulcer prophylaxis could be accounted for by providers' choice of another acceptable indication (eg, continuing preadmission medication). The authors speculated that the CDS intervention prompted providers to more accurately record the indication for acid suppressive therapy. It is also possible that providers simply chose an alternate indication to circumvent the decision‐support step. Perhaps as a result of these 2 offsetting factors, the actual use of acid suppressive therapy, regardless of indication, only decreased in a modest and statistically nonsignificant way, casting the true effectiveness of this CDS intervention into question.
Two other studies in this issue of the Journal of Hospital Medicine[5, 6] provide valuable insights into interactions between social and technical factors[7, 8, 9, 10] that determine the success or failure in the use of technology such as CDS to drive organizational performance. At the technical end of this sociotechnical spectrum, the study by Knight et al.[5] illustrated that a minimally configured and visually unintuitive medication decision‐support system resulted in a high number of alerts (approximately 17% of studied orders), leading to the well‐reported phenomena of alert fatigue and substantially lower response rate compared to those reported in the literature.[11, 12, 13] Moreover, the analysis suggested that response to these alerts were particularly muted among situations that were particularly high risk, including the patient being older, patient having a greater length of stay, care being delivered in the internal medicine service, resident physician being the prescriber, and the medication being on the Institute for Safe Medication Practices list of high‐alert medications. The investigators concluded that a redesign of the medication decision‐support system was needed.
The study by Chen et al.[6] illuminated how social factors pose challenges in implementing CDS. Investigators in this study were previously successful in using a combination of an education campaign and interruptive decision‐support prompts to reduce the inappropriate ordering of blood transfusions. However, even with a successful intervention, up to 30% of transfusions occurred outside of recommended guidelines. This finding prompted the investigators to analyze the free‐text reasons offered by providers for overriding the recommended guidelines. Two key patterns emerged from their structured analysis. First, many of the apparently inappropriate transfusions occurred under officially sanctioned protocols (such as stem cell transplant) that the computer system was not able to take into account in generating alerts. Second, many orders that reflected questionable practices were being entered by resident physicians, physician assistants, nurse practitioners, and nurses who were least empowered to challenge requests from senior staff.
Several practical and actionable lessons can be drawn from the 3 sets of investigators featured in this issue of the Journal of Hospital Medicine. First, all investigators defined metrics that should be tracked over time to demonstrate progress and to make iterative improvements; this discipline is needed in both academic and community settings to prioritize limited CDS resources in an objective and data‐driven way. Second, as the Herzig et al.[4] article illustrated, when it comes to evaluating the impact of CDS, we cannot be satisfied merely with process measures (eg, change in clinical documentation) at the expense of outcome measures (eg, decrease in inappropriate use of therapies). Third, as Chen et al.[6] recognized, CDS is but a component of an educational program to guide and alter clinical behavior, and must be deployed in conjunction with other educational tools such as newsletters, traditional lectures, or academic detailing. Fourth, clinicians with a stake in improving quality and safety should be on guard against the well‐documented phenomena of alert fatigue by ensuring their organization selects an appropriate framework for deciding which CDS alerts are activated andwhere possibledisplay the highest‐priority alerts in the most prominent and interruptive manner. Fifth, CDS must be maintained over time as clinical guidelines and clinicians' receptivity to each CDS evolve. Alerts that are not changing clinical behavior should either be modified or simply turned off. Sixth, free text entered as part of structured data entry (eg, while placing orders) or as reasons for overriding CDS (as in Chen et al.[6]) offer significant insights on how to optimize CDS, and should be monitored systematically on an ongoing basis to ensure the EMR addresses users' changing needs and mental models.
So what is the clinician with an interest in improving healthcare outcomes and organizational efficiency to do given CDS' limitations? One option is to wait for the science of CDS to further mature and have those advances embedded in the EMR at your organization. Another option might be to rely on the information technology and clinical informatics professionals at your organization to decide how CDS should be used locally. In 2014, these may be untenable choices for the following reasons. First, given the universal pressures to improve healthcare outcomes and contain costs,[14] healthcare organizations must use all available tools to achieve challenging performance goals. Second, as EMRs with CDS become commonplace, and as the 3 articles in this issue of the Journal of Hospital Medicine and others have illustrated, there are many opportunities to misuse or poorly implement CDS, with potentially dire consequences.[15] Third, design and deployment of effective CDS require information technology and informatics professionals to collaborate with clinicians to gauge the quality of EMR data used to drive CDS and clinicians' receptivity to CDS, illuminate the sociotechnical context in which to deploy the CDS, and champion the CDS intervention among their colleagues. Clinicians' input is therefore an essential ingredient to success. Fourth, organizational trust, a key aspect of a healthy safety culture, is hard to build and easy to erode.[9, 16] If clinicians at an organization lose trust in CDS because of poor design and deployment strategies, they are likely to ignore CDS in the future.[17]
Like other tools introduced into medicine, such as magnetic resonance imaging and highly active antiretroviral therapy, CDS will need to evolve as the clinical community grapples with its potential and limitations. As EMRs move toward ubiquity in the hospital setting, CDS will become part of the fabric of hospital‐based practice, and the Journal of Hospital Medicine readership would do well to learn about this new tool of the trade.
Disclosure
Nothing to report.
- More than half of US hospitals have at least a basic EHR, but stage 2 criteria remain challenging for most. Health Aff (Millwood). 2014;33(9):1664–1671.
- Clinical Decision Support Systems: State of the Art. AHRQ Publication No. 09‐0069‐EF. Rockville, MD: Agency for Healthcare Research and Quality; 2009.
- Clinical practice improvement and redesign: how change in workflow can be supported by clinical decision support. AHRQ Publication No. 09‐0054‐EF. Rockville, MD: Agency for Healthcare Research and Quality; June 2009.
- Improving appropriateness of acid‐suppressive medication use via computerized clinical decision support. J Hosp Med. 2015;10(1):41–45.
- Factors associated with medication warning acceptance for hospitalized adults. J Hosp Med. 2015;10(1):19–25.
- Why providers transfuse blood products outside recommended guidelines in spite of integrated electronic best practice alerts. J Hosp Med. 2015;10(1):1–7.
- Categorizing the unintended sociotechnical consequences of computerized provider order entry. Int J Med Inform. 2007;76(1):S21–S27.
- Unintended consequences of information technologies in health care: an interactive sociotechnical analysis. J Am Med Inform Assoc. 2007;15:542–549.
- A new socio‐technical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care. 2010;19(suppl 3):i68–i74.
- Brigham Young University. Socio‐technical theory. http://istheory.byu.edu/wiki/Socio‐technical_theory. Last updated November 15, 2011.
- Electronic drug interaction alerts in ambulatory care: the value and acceptance of high‐value alerts in US medical practices as assessed by an expert clinical panel. Drug Saf. 2011;34(7):587–593.
- Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc. 2006;13(1):5–11.
- Are we heeding the warning signs? Examining providers' overrides of computerized drug‐drug interaction alerts in primary care. PLoS One. 2013;8(12):e85071.
- The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27:759–769.
- Committee on Patient Safety and Health Information Technology, Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: National Academies Press; 2012.
- Explicit and implicit trust within safety culture. Risk Anal. 2006;26(5):1139–1150.
- Clinical decision support systems could be modified to reduce 'alert fatigue' while still minimizing the risk of litigation. Health Aff (Millwood). 2011;30:2310–2317.
Effect of an RRT on Resident Perceptions
Rapid response teams (RRTs) have been promoted by patient safety and quality‐improvement organizations as a strategy to reduce preventable in‐hospital deaths.[1] To date, critical analysis of RRTs has focused primarily on their impact on quality‐of‐care metrics.[2, 3, 4] Comparatively few studies have examined the cultural and educational impact of RRTs, particularly at academic medical centers, and those that do exist have focused almost exclusively on perceptions of nurses rather than resident physicians.[5, 6, 7, 8, 9, 10]
Although a prior study found that internal medicine and general surgery residents believed that RRTs improved patient safety, they were largely ambivalent about the RRT's impact on education and training.[11] To date, there has been no focused assessment of resident physician impressions of an RRT across years of training and medical specialty to inform the use of this multidisciplinary team as a component of their residency education.
We sought to determine whether resident physicians at a tertiary care academic medical center perceive educational benefit from collaboration with the RRT and whether they feel that the RRT adversely affects clinical autonomy.
METHODS
The Hospital
Moffitt‐Long Hospital, the tertiary academic medical center of the University of California, San Francisco (UCSF), is a 600‐bed acute care hospital that provides comprehensive critical care services and serves as a major referral center in northern California. There are roughly 5000 admissions to the hospital annually. At the time the study was conducted, there were approximately 200 RRT calls per 1000 adult hospital discharges.
The Rapid Response Team
The RRT is called to assess, triage, and treat patients who have experienced a decline in their clinical status short of a cardiopulmonary arrest. The RRT has been operational at UCSF since June 1, 2007, and is composed of a dedicated critical care nurse and respiratory therapist available 24 hours a day, 7 days a week. The RRT can be activated by any concerned staff member based on vital sign abnormalities, decreased urine output, changes in mental status, or any significant concern about the trajectory of the patient's clinical course.
When the RRT is called on a given patient, the patient's primary physician (at our institution, a resident) is also called to the bedside and works alongside the RRT to address the patient's acute clinical needs. The primary physician, bedside nurse, and RRT discuss the plan of care for the patient, including clinical evaluation, management, and the need for additional monitoring or a transition to a higher level of care. Residents at our institution receive no formal instruction regarding the role of the RRT or curriculum on interfacing with the RRT, and they do not serve as members of the RRT as part of a clinical rotation.
The Survey Process
Study subjects were asked via e‐mail to participate in a brief online survey. Subjects were offered the opportunity to win a $100 gift certificate in return for their participation. Weekly e‐mail reminders were sent for a period of 3 months or until a given subject had completed the survey. The survey was administered over a 3‐month period, from March through May, to allow time for residents to work with the RRT during the academic year. The Committee on Human Research at the University of California San Francisco Medical Center approved the study.
Target Population
All residents in specialties that involved direct patient care and the potential to use the adult RRT were included in the study. This included residents in the fields of internal medicine, neurology, general surgery, orthopedic surgery, neurosurgery, plastic surgery, urology, and otolaryngology (Table 1). Residents in pediatrics and obstetrics and gynecology were excluded, as emergencies in their patients are addressed by a pediatric RRT and an obstetric anesthesiologist, respectively. Residents in anesthesiology were excluded as they do not care for nonintensive care unit (ICU) patients as part of the primary team and are not involved in RRT encounters.
Demographic | No. (%) |
---|---|
Medical specialty | |
Internal medicine | 145 (61.4) |
Neurology | 18 (7.6) |
General surgery | 31 (13.1) |
Orthopedic surgery | 17 (7.2) |
Neurosurgery | 4 (1.7) |
Plastic surgery | 2 (0.8) |
Urology | 9 (3.8) |
Otolaryngology | 10 (4.2) |
Years of postgraduate training | Average 2.34 (SD 1.41) |
1 | 83 (35.2) |
2 | 60 (25.4) |
3 | 55 (23.3) |
4 | 20 (8.5) |
5 | 8 (3.4) |
6 | 5 (2.1) |
7 | 5 (2.1) |
Gender | |
Male | 133 (56.4) |
Female | 102 (43.2) |
Had exposure to RRT during training | |
Yes | 106 (44.9) |
No | 127 (53.8) |
Had previously initiated a call to the RRT | |
Yes | 106 (44.9) |
No | 128 (54.2) |
Survey Design
The resident survey contained 20 RRT‐related items and 7 demographic and practice items. Responses to RRT‐related questions used a 5‐point Likert scale ranging from strongly disagree to strongly agree. Prior to administration, the survey was piloted with physicians experienced in survey design to check comprehension and interpretation (for the full survey, see Supporting Information, Appendix, in the online version of this article).
Survey Objectives
The survey was designed to capture the experiences of residents who had cared for a patient for whom the RRT had been activated. Data collected included residents' perceptions of the impact of the RRT on their residency education and clinical autonomy, the quality of care provided, patient safety, and hospital‐wide culture. Potential barriers to use of the RRT were also examined.
Outcomes
The study's primary outcomes included the perceived educational benefit of the RRT and its perceived impact on clinical autonomy. Secondary outcomes included the effect of years of training and resident specialty on both the perceived educational benefit and impact on clinical autonomy among our study group.
Statistical Analysis
Responses to each survey item were described for each specialty, and subgroup analysis was conducted. For years of training, that item was dichotomized into either 1 year (henceforth referred to as interns) or greater than 1 year (henceforth referred to as upper‐level residents). Resident specialty was dichotomized into medical fields (internal medicine and neurology) or surgical fields. For statistical analysis, agreement statements were collapsed to either disagree (strongly disagree/disagree), neutral, or agree (strongly agree/agree). The influence of years of resident training and resident specialty was assessed for all items in the survey using χ2 or Fisher exact tests, as appropriate, for the 3 agreement categories. Analysis was conducted using SPSS 21.0 (IBM Corp., Armonk, NY).
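As a concrete illustration of this approach (a sketch in Python with SciPy, not the authors' SPSS workflow), the comparison below applies a chi-square test of independence to the collapsed three-category counts reported in Table 3 for the first item (comfort managing an unstable patient without the RRT), interns versus upper-level residents. Because SciPy's fisher_exact handles only 2x2 tables, the sketch simply flags when expected cell counts are small enough that the exact test described above would be preferred.

```python
# Sketch of the subgroup comparison described above: 5-point Likert responses
# collapsed into disagree / neutral / agree, compared between interns and
# upper-level residents. Counts are from Table 3, first item; this is
# illustrative, not the authors' actual SPSS analysis.
from scipy.stats import chi2_contingency

observed = [
    [39, 29, 14],   # interns: disagree, neutral, agree
    [65, 35, 52],   # upper-level residents: disagree, neutral, agree
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, P = {p:.3f}")  # P ~ 0.01, consistent with Table 3

# The Methods call for a Fisher exact test "as appropriate" (typically when
# expected cell counts are small); SciPy's fisher_exact supports only 2x2
# tables, so this sketch only flags that situation.
if (expected < 5).any():
    print("Some expected cell counts are < 5; an exact test would be preferred.")
```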
RESULTS
There were 246 responses to the survey of a possible 342, yielding a response rate of 72% (Table 2). Ten respondents stated that they had never cared for a patient where the RRT had been activated. Given their lack of exposure to the RRT, these respondents were excluded from the analysis, yielding a final sample size of 236. The demographic and clinical practice characteristics of respondents are shown in Table 1.
The resident | Strongly Disagree/Disagree, n (%) | Neutral, n (%) | Agree/Strongly Agree, n (%) |
---|---|---|---|
Is comfortable managing the unstable patient without the RRT | 104 (44.1) | 64 (27.1) | 66 (28.0) |
And RRT work together to make treatment decisions | 10 (4.2) | 13 (5.5) | 208 (88.1) |
Believes there are fewer opportunities to care for unstable floor patients due to the RRT | 188 (79.7) | 26 (11.0) | 17 (7.2) |
Feels less prepared to care for unstable patients due to the RRT | 201 (85.2) | 22 (9.3) | 13 (5.5) |
Feels that working with the RRT creates a valuable educational experience | 9 (3.8) | 39 (16.5) | 184 (78.0) |
Feels that nurses caring for the unstable patient should always contact them prior to contacting the RRT | 123 (52.1) | 33 (14.0) | 76 (32.2) |
Would be unhappy with nurses calling RRT prior to contacting them | 141 (59.7) | 44 (18.6) | 51 (21.6) |
Perceives that the presence of RRT decreases residents' autonomy | 179 (75.8) | 25 (10.6) | 28 (11.9) |
Demographics and Primary Outcomes
Interns comprised 83 (35%) of the respondents; the average time in postgraduate training was 2.34 years (standard deviation=1.41). Of respondents, 163 (69%) were in medical fields, and 73 (31%) were in surgical fields. Overall responses to the survey are shown in Table 2, and subgroup analysis is shown in Table 3.
The resident | 1 Year, n=83, n (%) | >1 Year, n=153, n (%) | P Value | Medical, n=163, n (%) | Surgical, n=73, n (%) | P Value |
---|---|---|---|---|---|---|
Is comfortable managing the unstable patient without the RRT | 0.01 | <0.01 | ||||
Strongly disagree/disagree | 39 (47.6) | 65 (42.8) | 67 (41.6) | 37 (50.7) | ||
Neutral | 29 (35.4) | 35 (23.0) | 56 (34.8) | 8 (11.0) | ||
Agree/strongly agree | 14 (17.1) | 52 (34.2) | 38 (23.6) | 28 (38.4) | ||
And RRT work together to make treatment decisions | 0.61 | 0.04 | ||||
Strongly disagree/disagree | 2 (2.4) | 8 (5.4) | 4 (2.5) | 6 (8.7) | ||
Neutral | 5 (6.1) | 8 (5.4) | 7 (4.3) | 6 (8.7) | ||
Agree/strongly agree | 75 (91.5) | 137 (89.3) | 151 (93.2) | 57 (82.6) | ||
Believes there are fewer opportunities to care for unstable floor patients due to the RRT | 0.05 | 0.04 | ||||
Strongly disagree/disagree | 59 (72.8) | 129 (86.0) | 136 (85.5) | 52 (72.2) | ||
Neutral | 13 (16.0) | 13 (8.7) | 15 (9.4) | 11 (15.3) | ||
Agree/strongly agree | 9 (11.1) | 8 (5.3) | 8 (5.0) | 9 (12.5) | ||
Feels less prepared to care for unstable patients due to the RRT | <0.01 | 0.79 | ||||
Strongly disagree/disagree | 62 (74.7) | 139 (90.8) | 140 (85.9) | 61 (83.6) | ||
Neutral | 14 (16.9) | 8 (5.2) | 15 (9.2) | 7 (9.6) | ||
Agree/strongly agree | 7 (8.4) | 6 (3.9) | 8 (4.9) | 5 (6.8) | |
Feels working with the RRT is a valuable educational experience | 0.61 | 0.01 | ||||
Strongly disagree/disagree | 2 (2.4) | 7 (4.7) | 2 (1.2) | 7 (9.9) | ||
Neutral | 12 (14.6) | 27 (18.0) | 25 (15.5) | 14 (19.7) | ||
Agree/strongly agree | 68 (82.9) | 116 (77.3) | 134 (83.2) | 50 (70.4) | ||
Feels nurses caring for unstable patients should always contact the resident prior to contacting the RRT | 0.49 | <0.01 | ||||
Strongly disagree/disagree | 47 (57.3) | 76 (50.7) | 97 (60.2) | 26 (36.6) | ||
Neutral | 9 (11.0) | 24 (16.0) | 26 (16.1) | 7 (9.9) | ||
Agree/strongly agree | 26 (31.7) | 50 (33.3) | 38 (23.6) | 38 (53.5) | ||
Would be unhappy with nurses calling RRT prior to contacting them | 0.81 | <0.01 | ||||
Strongly disagree/disagree | 51 (61.4) | 90 (58.8) | 109 (66.9) | 32 (43.8) | ||
Neutral | 16 (19.3) | 28 (18.3) | 30 (18.4) | 14 (19.2) | ||
Agree/strongly agree | 16 (19.3) | 35 (22.9) | 24 (14.7) | 27 (37.0) | ||
Perceives that the presence of the RRT decreases autonomy as a physician | 0.95 | 0.18 | ||||
Strongly disagree/disagree | 63 (77.8) | 116 (76.8) | 127 (79.9) | 52 (71.2) | ||
Neutral | 9 (11.1) | 16 (10.6) | 17 (10.7) | 8 (11.0) | ||
Agree/strongly agree | 9 (11.1) | 19 (12.6) | 15 (9.4) | 13 (17.8) |
Effect of the RRT on Resident Education
Of all residents, 66 (28%) agreed that they felt comfortable managing an unstable patient without the assistance of the RRT. Surgical residents felt more comfortable managing an unstable patient alone (38%) compared with medical residents (24%) (P<0.01). Interns felt less comfortable caring for unstable patients without the RRT's assistance (17%) compared with upper‐level residents (34%) (P=0.01).
Residents overall disagreed with the statement that the RRT left them feeling less prepared to care for unstable patients (n=201; 85%). More upper‐level residents disagreed with this assertion (91%) compared with interns (75%) (P<0.01). Responses to this question did not differ significantly between medical and surgical residents.
Upper‐level residents were more likely to disagree with the statement that the RRT resulted in fewer opportunities to care for unstable patients (n=129; 86%) compared with interns (n=59; 73%) (P=0.05). Medical residents were also more likely to disagree with this statement (n=136; 86%) compared with surgical residents (n=52; 72%) (P=0.04).
With respect to residents' overall impressions of the educational value of the RRT, 68 (83%) interns and 116 (77%) upper‐level residents agreed that it provided a valuable educational experience (P=0.61). Medical and surgical residents differed in this regard, with 134 (83%) medical residents and 50 (70%) surgical residents agreeing that the RRT provided a valuable educational experience (P=0.01).
Effect of the RRT on Clinical Autonomy
Of all residents, 123 (52%) disagreed that the bedside nurse should always contact the primary resident prior to calling the RRT; 76 (32%) agreed with this statement. Medicine residents were more likely to disagree with this approach (n=97; 60%) than were surgical residents (n=26; 36%) (P<0.01). There was no difference between interns and upper‐level residents in response to this question. Most of those who disagreed with this statement were medical residents, whereas most surgical residents (n=38; 54%) agreed that they should be contacted first (P<0.01).
There were no differences between interns and upper‐level residents with respect to perceptions of the RRT's impact on clinical autonomy: 11% of interns and 13% of upper‐level residents agreed that the RRT decreased their clinical autonomy as physicians. There was no significant difference between medical and surgical residents' responses to this question.
The majority of residents (n=208; 88%) agreed that they and the RRT work together to make treatment decisions for patients. This was true regardless of year of training (P=0.61), but it was expressed more often among medical residents than surgical residents (n=151, 93% vs n=57, 83%; P=0.04).
DISCUSSION
Most studies examining the educational and cultural impact of RRTs exist in the nursing literature. These studies demonstrate that medical and surgical nurses are often reluctant to call the RRT for fear of criticism by the patient's physician.[5, 8, 9, 10, 11, 12, 13] In contrast, our data demonstrate that resident physicians across all levels of training and specialties have a positive view of the RRT and its role in patient care. The data support our hypothesis that although most residents perceive educational benefit from their interactions with the RRT, this perception is greater for less‐experienced residents and for those residents who routinely provide care for critically ill patients and serve as code team leaders. In addition, a minority of residents, irrespective of years of training or medical specialty, felt that the RRT negatively impacted their clinical autonomy.
Our data have several important implications. First, although over half of the residents surveyed had not been exposed to RRTs during medical school, and despite having no formal training on the role of the RRT during residency, most residents identified their interactions with the RRT as potential learning opportunities. This finding differs from that of Benin and colleagues, who suggested that RRTs might negatively impact residents' educational development and decrease opportunities for high‐stakes clinical reasoning by allowing the clinical decision‐making process to be driven by the RRT staff rather than the resident.[5] One possible explanation for this discrepancy is the variable makeup of the RRT at different institutions. At our medical center, the RRT is composed of a critical care nurse and a respiratory therapist, whereas at other institutions the RRT may be led by a resident, fellow, attending hospitalist, or intensivist, any of whom might supersede the primary resident once the RRT is engaged.
In our study, the perceived educational benefit of the RRT was most pronounced with interns. Interns likely derive incrementally greater benefit from each encounter with an acutely decompensating patient than do senior residents, whether the RRT is present or not. Observing the actions of seasoned nurses and respiratory therapists may demonstrate new tools for interns to use in their management of such situations; for example, the RRT may suggest different modes of oxygen delivery or new diagnostic tests. The RRT also likely helps interns navigate the hospital system by assisting with decisions around escalation of care and serving as a liaison to ICU staff.
Our data also have implications for resident perceptions of clinical autonomy. Interns, far less experienced caring for unstable patients than upper‐level residents, expressed more concern about the RRT stripping them of opportunities to do so and about feeling less prepared to handle clinically deteriorating patients. Part of this perception may be due to interns feeling less comfortable taking charge of a patient's care in the presence of an experienced critical care nurse and respiratory therapist, both for reasons related to clinical experience and to a cultural hierarchy that often places the intern at the bottom of the authority spectrum. In addition, when the RRT is called on an intern's patient, the senior resident may accompany the intern to the bedside and guide the intern on his or her approach to the situation; in some cases, the senior resident may take charge, leaving the intern feeling less autonomous.
If training sessions could be developed to address not only clinical decision making but also multidisciplinary team interactions and roles in the acute care setting, interns' concerns might be mitigated. Such curricula could also enhance residents' experience in interprofessional care, an aspect of clinical training that has become increasingly important in the age of limited duty hours and higher‐volume, higher‐acuity inpatient censuses. An RRT model, like a code blue model, could be used in simulation‐based training to increase both comfort with use of the RRT and the efficiency of the RRT‐resident‐nurse team. Although our study did not specifically address residents' perceptions of multidisciplinary teams, this could be a promising area for further study.
For surgical residents, additional factors are likely at play. Surgical residents spend significant time in the operating room, reducing their time at the bedside and hindering their ability to respond swiftly when an RRT is called on their patient. This could cause surgical residents to feel less involved in the care of that patient (a possibility supported by our finding that fewer surgical residents felt able to collaborate with the RRT) and to derive less educational benefit and clinical satisfaction from the experience. Differences between medical and surgical postgraduate training, reflected in varying clinical roles and duration of training, also likely play a role, and as such it may not be appropriate to draw direct comparisons between respective postgraduate year levels. In addition, differences in patients' medical complexity, varying allegiance to the traditional hierarchy of medical providers, and degree of familiarity with the RRT itself may affect surgical residents' comfort with the RRT.
Limitations of our study include that it was conducted at a single site and addressed a specific population of residents at our tertiary academic center. Though we achieved an excellent response rate, our subspecialty sample sizes were too small to allow for individual comparisons among those groups. Conducting a larger study at multiple institutions where the makeup of the RRT differs could provide further insight into how different clinical environments and RRT models affect resident perceptions. Finally, we allowed each respondent to interpret both educational benefit and clinical autonomy in the context of their own level of training and clinical practice rather than providing strict definitions of these terms. There is no standardized definition of autonomy in the context of resident clinical practice, and we did not measure direct educational outcomes; our study design therefore allowed only for measurement of perceptions of these concepts. Measurement of the actual educational value of the RRT (for example, through direct clinical observation or by incorporating the RRT experience into an entrustable professional activity) would provide more quantitative evidence of the RRT's utility for our resident population. Future study in this area would help to support the development and ongoing assessment of RRT‐based curricula.
CONCLUSION
Our data show that resident physicians have a strongly favorable opinion of the RRT at our institution. Future studies should aim to quantify the educational benefit of RRTs for residents and identify areas for curricular development to enhance resident education as RRTs become more pervasive.
- Institute for Healthcare Improvement. Rapid response teams. Available at: http://www.ihi.org/topics/rapidresponseteams. Accessed May 5, 2014.
- Rapid response teams: a systematic review and meta‐analysis. Arch Intern Med. 2010;170(1):18–26.
- Findings of the first consensus conference on medical emergency teams. Crit Care Med. 2006;34(9):2463–2478.
- Rapid response systems: a systematic review. Crit Care Med. 2007;35(5):1238–1243.
- Defining impact of a rapid response team: qualitative study with nurses, physicians and hospital administrators. BMJ Qual Saf. 2012;21(5):391–398.
- How RNs rescue patients: a qualitative study of RNs' perceived involvement in rapid response teams. Qual Saf Health Care. 2010;19(5):e13.
- Rapid response team approach to staff satisfaction. Orthop Nurs. 2008;27(5):266–271; quiz 272–273.
- Voices from the floor: nurses' perceptions of the medical emergency team. Intensive Crit Care Nurs. 2006;22(3):138–143.
- Rapid response teams seen through the eyes of the nurse. Am J Nurs. 2010;110(6):28–34; quiz 35–36.
- What stops hospital clinical staff from following protocols? An analysis of the incidence and factors behind the failure of bedside clinical staff to activate the rapid response system in a multi‐campus Australian metropolitan healthcare service. BMJ Qual Saf. 2012;21(7):569–575.
- Resident and RN perceptions of the impact of a medical emergency team on education and patient safety in an academic medical center. Crit Care Med. 2009;37(12):3091–3096.
- Why don't hospital staff activate the rapid response system (RRS)? How frequently is it needed and can the process be improved? Implement Sci. 2011;6:39.
- Timing and teamwork: an observational pilot study of patients referred to a rapid response team with the aim of identifying factors amenable to re‐design of a rapid response system. Resuscitation. 2012;83(6):782–787.
Rapid response teams (RRTs) have been promoted by patient safety and quality‐improvement organizations as a strategy to reduce preventable in‐hospital deaths.[1] To date, critical analysis of RRTs has focused primarily on their impact on quality‐of‐care metrics.[2, 3, 4] Comparatively few studies have examined the cultural and educational impact of RRTs, particularly at academic medical centers, and those that do exist have focused almost exclusively on perceptions of nurses rather than resident physicians.[5, 6, 7, 8, 9, 10]
Although a prior study found that internal medicine and general surgery residents believed that RRTs improved patient safety, they were largely ambivalent about the RRT's impact on education and training.[11] To date, there has been no focused assessment of resident physician impressions of an RRT across years of training and medical specialty to inform the use of this multidisciplinary team as a component of their residency education.
We sought to determine whether resident physicians at a tertiary care academic medical center perceive educational benefit from collaboration with the RRT and whether they feel that the RRT adversely affects clinical autonomy.
METHODS
The Hospital
Moffitt‐Long Hospital, the tertiary academic medical center of the University of California, San Francisco (UCSF), is a 600‐bed acute care hospital that provides comprehensive critical care services and serves as a major referral center in northern California. There are roughly 5000 admissions to the hospital annually. At the time the study was conducted, there were approximately 200 RRT calls per 1000 adult hospital discharges.
The Rapid Response Team
The RRT is called to assess, triage, and treat patients who have experienced a decline in their clinical status short of a cardiopulmonary arrest. The RRT has been operational at UCSF since June 1, 2007, and is composed of a dedicated critical care nurse and respiratory therapist available 24 hours a day, 7 days a week. The RRT can be activated by any concerned staff member based on vital sign abnormalities, decreased urine output, changes in mental status, or any significant concern about the trajectory of the patient's clinical course.
When the RRT is called on a given patient, the patient's primary physician (at our institution, a resident) is also called to the bedside and works alongside the RRT to address the patient's acute clinical needs. The primary physician, bedside nurse, and RRT discuss the plan of care for the patient, including clinical evaluation, management, and the need for additional monitoring or a transition to a higher level of care. Residents at our institution receive no formal instruction regarding the role of the RRT or curriculum on interfacing with the RRT, and they do not serve as members of the RRT as part of a clinical rotation.
The Survey Process
Study subjects were asked via e‐mail to participate in a brief online survey. Subjects were offered the opportunity to win a $100 gift certificate in return for their participation. Weekly e‐mail reminders were sent for a period of 3 months or until a given subject had completed the survey. The survey was administered over a 3‐month period, from March through May, to allow time for residents to work with the RRT during the academic year. The Committee on Human Research at the University of California San Francisco Medical Center approved the study.
Target Population
All residents in specialties that involved direct patient care and the potential to use the adult RRT were included in the study. This included residents in the fields of internal medicine, neurology, general surgery, orthopedic surgery, neurosurgery, plastic surgery, urology, and otolaryngology (Table 1). Residents in pediatrics and obstetrics and gynecology were excluded, as emergencies in their patients are addressed by a pediatric RRT and an obstetric anesthesiologist, respectively. Residents in anesthesiology were excluded as they do not care for nonintensive care unit (ICU) patients as part of the primary team and are not involved in RRT encounters.
Demographic | No. (%) |
---|---|
| |
Medical specialty | |
Internal medicine | 145 (61.4) |
Neurology | 18 (7.6) |
General surgery | 31 (13.1) |
Orthopedic surgery | 17 (7.2) |
Neurosurgery | 4 (1.7) |
Plastic surgery | 2 (0.8) |
Urology | 9 (3.8) |
Otolaryngology | 10 (4.2) |
Years of postgraduate training | Average 2.34 (SD 1.41) |
1 | 83 (35.2) |
2 | 60 (25.4) |
3 | 55 (23.3) |
4 | 20 (8.5) |
5 | 8 (3.4) |
6 | 5 (2.1) |
7 | 5 (2.1) |
Gender | |
Male | 133 (56.4) |
Female | 102 (43.2) |
Had exposure to RRT during training | |
Yes | 106 (44.9) |
No | 127 (53.8) |
Had previously initiated a call to the RRT | |
Yes | 106 (44.9) |
No | 128 (54.2) |
Survey Design
The resident survey contained 20 RRT‐related items and 7 demographic and practice items. Responses for RRT‐related questions utilized a 5‐point Likert scale ranging from strongly disagree to strongly agree. The survey was piloted prior to administration to check comprehension and interpretation by physicians with experience in survey writing (for the full survey, see Supporting Information, Appendix, in the online version of this article).
Survey Objectives
The survey was designed to capture the experiences of residents who had cared for a patient for whom the RRT had been activated. Data collected included residents' perceptions of the impact of the RRT on their residency education and clinical autonomy, the quality of care provided, patient safety, and hospital‐wide culture. Potential barriers to use of the RRT were also examined.
Outcomes
The study's primary outcomes included the perceived educational benefit of the RRT and its perceived impact on clinical autonomy. Secondary outcomes included the effect of years of training and resident specialty on both the perceived educational benefit and impact on clinical autonomy among our study group.
Statistical Analysis
Responses to each survey item were described for each specialty, and subgroup analysis was conducted. For years of training, that item was dichotomized into either 1 year (henceforth referred to as interns) or greater than 1 year (henceforth referred to as upper‐level residents). Resident specialty was dichotomized into medical fields (internal medicine and neurology) or surgical fields. For statistical analysis, agreement statements were collapsed to either disagree (strongly disagree/disagree), neutral, or agree (strongly agree/agree). The influence of years of resident training and resident specialty was assessed for all items in the survey using 2 or Fisher exact tests as appropriate for the 3 agreement categories. Analysis was conducted using SPSS 21.0 (IBM Corp., Armonk, NY).
RESULTS
There were 246 responses to the survey of a possible 342, yielding a response rate of 72% (Table 2). Ten respondents stated that they had never cared for a patient where the RRT had been activated. Given their lack of exposure to the RRT, these respondents were excluded from the analysis, yielding a final sample size of 236. The demographic and clinical practice characteristics of respondents are shown in Table 1.
The resident | Strongly Disagree/Disagree, n (%) | Neutral, n (%) | Agree/ Strongly Agree, n (%) |
---|---|---|---|
| |||
Is comfortable managing the unstable patient without the RRT | 104 (44.1) | 64 (27.1) | 66 (28.0) |
And RRT work together to make treatment decisions | 10 (4.2) | 13 (5.5) | 208 (88.1) |
Believes there are fewer opportunities to care for unstable floor patients due to the RRT | 188 (79.7) | 26 (11.0) | 17 (7.2) |
Feels less prepared to care for unstable patients due to the RRT | 201 (85.2) | 22 (9.3) | 13 (5.5) |
Feels that working with the RRT creates a valuable educational experience | 9 (3.8) | 39 (16.5) | 184 (78.0) |
Feels that nurses caring for the unstable patient should always contact them prior to contacting the RRT | 123 (52.1) | 33 (14.0) | 76 (32.2) |
Would be unhappy with nurses calling RRT prior to contacting them | 141 (59.7) | 44 (18.6) | 51 (21.6) |
Perceives that the presence of RRT decreases residents' autonomy | 179 (75.8) | 25 (10.6) | 28 (11.9) |
Demographics and Primary Outcomes
Interns comprised 83 (35%) of the respondents; the average time in postgraduate training was 2.34 years (standard deviation=1.41). Of respondents, 163 (69%) were in medical fields, and 73 (31%) were in surgical fields. Overall responses to the survey are shown in Table 2, and subgroup analysis is shown in Table 3.
The resident | 1 Year, n=83, n (%) | >1 Year, n=153, n (%) | P Value | Medical, n=163, n (%) | Surgical, n=73, n (%) | P Value |
---|---|---|---|---|---|---|
| ||||||
Is comfortable managing the unstable patient without the RRT | 0.01 | <0.01 | ||||
Strongly disagree/disagree | 39 (47.6) | 65 (42.8) | 67 (41.6) | 37 (50.7) | ||
Neutral | 29 (35.4) | 35 (23.0) | 56 (34.8) | 8 (11.0) | ||
Agree/strongly agree | 14 (17.1) | 52 (34.2) | 38 (23.6) | 28 (38.4) | ||
And RRT work together to make treatment decisions | 0.61 | 0.04 | ||||
Strongly disagree/disagree | 2 (2.4) | 8 (5.4) | 4 (2.5) | 6 (8.7) | ||
Neutral | 5 (6.1) | 8 (5.4) | 7 (4.3) | 6 (8.7) | ||
Agree/strongly agree | 75 (91.5) | 137 (89.3) | 151 (93.2) | 57 (82.6) | ||
Believes there are fewer opportunities to care for unstable floor patients due to the RRT | 0.05 | 0.04 | ||||
Strongly disagree/disagree | 59 (72.8) | 129 (86.0) | 136 (85.5) | 52 (72.2) | ||
Neutral | 13 (16.0) | 13 (8.7) | 15 (9.4) | 11 (15.3) | ||
Agree/strongly agree | 9 (11.1) | 8 (5.3) | 8 (5.0) | 9 (12.5) | ||
Feels less prepared to care for unstable patients due to the RRT | <0.01 | 0.79 | ||||
Strongly disagree/disagree | 62 (74.7) | 139 (90.8) | 140 (85.9) | 61 (83.6) | ||
Neutral | 14 (16.9) | 8 (5.2) | 15 (9.2) | 7 (9.6) | ||
Agree/Strongly agree | 7 (8.4) | 6 (3.9) | 8 (4.9) | 5 (6.8) | ||
Feels working with the RRT is a valuable educational experience | 0.61 | 0.01 | ||||
Strongly disagree/disagree | 2 (2.4) | 7 (4.7) | 2 (1.2) | 7 (9.9) | ||
Neutral | 12 (14.6) | 27 (18.0) | 25 (15.5) | 14 (19.7) | ||
Agree/strongly agree | 68 (82.9) | 116 (77.3) | 134 (83.2) | 50 (70.4) | ||
Feels nurses caring for unstable patients should always contact the resident prior to contacting the RRT | 0.49 | <0.01 | ||||
Strongly disagree/disagree | 47 (57.3) | 76 (50.7) | 97 (60.2) | 26 (36.6) | ||
Neutral | 9 (11.0) | 24 (16.0) | 26 (16.1) | 7 (9.9) | ||
Agree/strongly agree | 26 (31.7) | 50 (33.3) | 38 (23.6) | 38 (53.5) | ||
Would be unhappy with nurses calling RRT prior to contacting them | 0.81 | <0.01 | ||||
Strongly disagree/disagree | 51 (61.4) | 90 (58.8) | 109 (66.9) | 32 (43.8) | ||
Neutral | 16 (19.3) | 28 (18.3) | 30 (18.4) | 14 (19.2) | ||
Agree/strongly agree | 16 (19.3) | 35 (22.9) | 24 (14.7) | 27 (37.0) | ||
Perceives that the presence of the RRT decreases autonomy as a physician | 0.95 | 0.18 | ||||
Strongly disagree/disagree | 63 (77.8) | 116 (76.8) | 127 (79.9) | 52 (71.2) | ||
Neutral | 9 (11.1) | 16 (10.6) | 17 (10.7) | 8 (11.0) | ||
Agree/strongly agree | 9 (11.1) | 19 (12.6) | 15 (9.4) | 13 (17.8) |
Effect of the RRT on Resident Education
Of all residents, 66 (28%) agreed that they felt comfortable managing an unstable patient without the assistance of the RRT. Surgical residents felt more comfortable managing an unstable patient alone (38%) compared medical residents (24%) (P<0.01). Interns felt less comfortable caring for unstable patients without the RRT's assistance (17%) compared with upper‐level residents (34%) (P=0.01).
Residents overall disagreed with the statement that the RRT left them feeling less prepared to care for unstable patients (n=201; 85%). More upper‐level residents disagreed with this assertion (91%) compared with interns (75%) (P<0.01). Responses to this question did not differ significantly between medical and surgical residents.
Upper‐level residents were more likely to disagree with the statement that the RRT resulted in fewer opportunities to care for unstable patients (n=129; 86%) compared with interns (n=59; 73%) (P=0.05). Medical residents were also more likely to disagree with this statement (n=136; 86%) compared with surgical residents (n=52; 72%) (P=0.04).
With respect to residents' overall impressions of the educational value of the RRT, 68 (83%) interns and 116 (77%) upper‐level residents agreed that it provided a valuable educational experience (P=0.61). Medical and surgical residents differed in this regard, with 134 (83%) medical residents and 50 (70%) surgical residents agreeing that the RRT provided a valuable educational experience (P=0.01).
Effect of the RRT on Clinical Autonomy
Of all residents, 123 (52%) disagreed that the bedside nurse should always contact the primary resident prior to calling the RRT; 76 (32%) agreed with this statement. Medicine residents were more likely to disagree with this approach (n=97; 60%) than were surgical residents (n=26; 36%) (P<0.01). There was no difference between interns and upper‐level residents in response to this question. Most of those who disagreed with this statement were medical residents, whereas most surgical residents (n=38; 54%) agreed that they should be contacted first (P<0.01).
There were no differences between interns and upper‐level residents with respect to perceptions of the RRT's impact on clinical autonomy: 11% of interns and 13% of residents agreed that the RRT decreased their clinical autonomy as a physician. There was no significant difference between medical and surgical residents' responses to this question.
The majority of residents (n=208; 88%) agreed that they and the RRT work together to make treatment decisions for patients. This was true regardless of year of training (P=0.61), but it was expressed more often among medical residents than surgical residents (n=151, 93% vs n=57, 83%; P=0.04).
DISCUSSION
Most studies examining the educational and cultural impact of RRTs exist in the nursing literature. These studies demonstrate that medical and surgical nurses are often reluctant to call the RRT for fear of criticism by the patient's physician.[5, 8, 9, 10, 11, 12, 13] In contrast, our data demonstrate that resident physicians across all levels of training and specialties have a positive view of the RRT and its role in patient care. The data support our hypothesis that although most residents perceive educational benefit from their interactions with the RRT, this perception is greater for less‐experienced residents and for those residents who routinely provide care for critically ill patients and serve as code team leaders. In addition, a minority of residents, irrespective of years of training or medical specialty, felt that the RRT negatively impacted their clinical autonomy.
Our data have several important implications. First, although over half of the residents surveyed had not been exposed to RRTs during medical school, and despite having no formal training on the role of the RRT during residency, most residents identified their interactions with the RRT as potential learning opportunities. This finding differs from that of Benin and colleagues, who suggested that RRTs might negatively impact residents' educational development and decrease opportunities for high‐stakes clinical reasoning by allowing the clinical decision‐making process to be driven by the RRT staff rather than the resident.[5] One possible explanation for this discrepancy is the variable makeup of the RRT at different institutions. At our medical center, the RRT is comprised of a critical care nurse and respiratory therapist, whereas at other institutions, the RRT may be led by a resident, fellow, attending hospitalist, or intensivist, any of whom might supersede the primary resident once the RRT is engaged.
In our study, the perceived educational benefit of the RRT was most pronounced with interns. Interns likely derive incrementally greater benefit from each encounter with an acutely decompensating patient than do senior residents, whether the RRT is present or not. Observing the actions of seasoned nurses and respiratory therapists may demonstrate new tools for interns to use in their management of such situations; for example, the RRT may suggest different modes of oxygen delivery or new diagnostic tests. The RRT also likely helps interns navigate the hospital system by assisting with decisions around escalation of care and serving as a liaison to ICU staff.
Our data also have implications for resident perceptions of clinical autonomy. Interns, far less experienced caring for unstable patients than upper‐level residents, expressed more concern about the RRT stripping them of opportunities to do so and about feeling less prepared to handle clinically deteriorating patients. Part of this perception may be due to interns feeling less comfortable taking charge of a patient's care in the presence of an experienced critical care nurse and respiratory therapist, both for reasons related to clinical experience and to a cultural hierarchy that often places the intern at the bottom of the authority spectrum. In addition, when the RRT is called on an intern's patient, the senior resident may accompany the intern to the bedside and guide the intern on his or her approach to the situation; in some cases, the senior resident may take charge, leaving the intern feeling less autonomous.
If training sessions could be developed to address not only clinical decision making, but also multidisciplinary team interactions and roles in the acute care setting, this may mitigate interns' concerns. Such curricula could also enhance residents' experience in interprofessional care, an aspect of clinical training that has become increasingly important in the age of limited duty hours and higher volume, and higher acuity inpatient censuses. An RRT model, like a code blue model, could be used in simulation‐based training to increase both comfort with use of the RRT and efficiency of the RRTresidentnurse team. Although our study did not address specifically residents' perceptions of multidisciplinary teams, this could be a promising area for further study.
For surgical residents, additional factors are likely at play. Surgical residents spend significant time in the operating room, reducing time present at the bedside and hindering the ability to respond swiftly when an RRT is called on their patient. This could cause surgical residents to feel less involved in the care of that patientsupported by our finding that fewer surgical residents felt able to collaborate with the RRTand also to derive less educational benefit and clinical satisfaction from the experience. Differences between medical and surgical postgraduate training also likely play a role, manifest by varying clinical roles and duration of training, and as such it may not be appropriate to draw direct comparisons between respective postgraduate year levels. In addition, differences in patients' medical complexity, varying allegiance to the traditional hierarchy of medical providers, and degree of familiarity with the RRT itself may impact surgical residents' comfort with the RRT.
Limitations of our study include that it was conducted at a single site and addressed a specific population of residents at our tertiary academic center. Though we achieved an excellent response rate, our subspecialty sample sizes were too small to allow for individual comparisons among those groups. Conducting a larger study at multiple institutions where the makeup of the RRT differs could provide further insight into how different clinical environments and different RRT models impact resident perceptions. Finally, we allowed each respondent to interpret both educational benefit and clinical autonomy in the context of their own level of training and clinical practice rather than providing strict definitions of these terms. There is no standardized definition of autonomy in the context of resident clinical practice, and we did not measure direct educational outcomes. Our study design therefore allowed only for measurement of perceptions of these concepts. Measurement of actual educational value of the RRTfor example, through direct clinical observation or by incorporating the RRT experience into an entrustable professional activitywould provide more quantitative evidence of the RRT's utility for our resident population. Future study in this area would help to support the development and ongoing assessment of RRT‐based curricula moving forward.
CONCLUSION
Our data show that resident physicians have a strongly favorable opinion of the RRT at our institution. Future studies should aim to quantify the educational benefit of RRTs for residents and identify areas for curricular development to enhance resident education as RRTs become more pervasive.
Rapid response teams (RRTs) have been promoted by patient safety and quality‐improvement organizations as a strategy to reduce preventable in‐hospital deaths.[1] To date, critical analysis of RRTs has focused primarily on their impact on quality‐of‐care metrics.[2, 3, 4] Comparatively few studies have examined the cultural and educational impact of RRTs, particularly at academic medical centers, and those that do exist have focused almost exclusively on perceptions of nurses rather than resident physicians.[5, 6, 7, 8, 9, 10]
Although a prior study found that internal medicine and general surgery residents believed that RRTs improved patient safety, they were largely ambivalent about the RRT's impact on education and training.[11] To date, there has been no focused assessment of resident physician impressions of an RRT across years of training and medical specialty to inform the use of this multidisciplinary team as a component of their residency education.
We sought to determine whether resident physicians at a tertiary care academic medical center perceive educational benefit from collaboration with the RRT and whether they feel that the RRT adversely affects clinical autonomy.
METHODS
The Hospital
Moffitt‐Long Hospital, the tertiary academic medical center of the University of California, San Francisco (UCSF), is a 600‐bed acute care hospital that provides comprehensive critical care services and serves as a major referral center in northern California. There are roughly 5000 admissions to the hospital annually. At the time the study was conducted, there were approximately 200 RRT calls per 1000 adult hospital discharges.
The Rapid Response Team
The RRT is called to assess, triage, and treat patients who have experienced a decline in their clinical status short of a cardiopulmonary arrest. The RRT has been operational at UCSF since June 1, 2007, and is composed of a dedicated critical care nurse and respiratory therapist available 24 hours a day, 7 days a week. The RRT can be activated by any concerned staff member based on vital sign abnormalities, decreased urine output, changes in mental status, or any significant concern about the trajectory of the patient's clinical course.
When the RRT is called on a given patient, the patient's primary physician (at our institution, a resident) is also called to the bedside and works alongside the RRT to address the patient's acute clinical needs. The primary physician, bedside nurse, and RRT discuss the plan of care for the patient, including clinical evaluation, management, and the need for additional monitoring or a transition to a higher level of care. Residents at our institution receive no formal instruction regarding the role of the RRT or curriculum on interfacing with the RRT, and they do not serve as members of the RRT as part of a clinical rotation.
The Survey Process
Study subjects were asked via e‐mail to participate in a brief online survey. Subjects were offered the opportunity to win a $100 gift certificate in return for their participation. Weekly e‐mail reminders were sent for a period of 3 months or until a given subject had completed the survey. The survey was administered over a 3‐month period, from March through May, to allow time for residents to work with the RRT during the academic year. The Committee on Human Research at the University of California San Francisco Medical Center approved the study.
Target Population
All residents in specialties that involved direct patient care and the potential to use the adult RRT were included in the study. This included residents in the fields of internal medicine, neurology, general surgery, orthopedic surgery, neurosurgery, plastic surgery, urology, and otolaryngology (Table 1). Residents in pediatrics and obstetrics and gynecology were excluded, as emergencies in their patients are addressed by a pediatric RRT and an obstetric anesthesiologist, respectively. Residents in anesthesiology were excluded as they do not care for nonintensive care unit (ICU) patients as part of the primary team and are not involved in RRT encounters.
Demographic | No. (%) |
---|---|
| |
Medical specialty | |
Internal medicine | 145 (61.4) |
Neurology | 18 (7.6) |
General surgery | 31 (13.1) |
Orthopedic surgery | 17 (7.2) |
Neurosurgery | 4 (1.7) |
Plastic surgery | 2 (0.8) |
Urology | 9 (3.8) |
Otolaryngology | 10 (4.2) |
Years of postgraduate training | Average 2.34 (SD 1.41) |
1 | 83 (35.2) |
2 | 60 (25.4) |
3 | 55 (23.3) |
4 | 20 (8.5) |
5 | 8 (3.4) |
6 | 5 (2.1) |
7 | 5 (2.1) |
Gender | |
Male | 133 (56.4) |
Female | 102 (43.2) |
Had exposure to RRT during training | |
Yes | 106 (44.9) |
No | 127 (53.8) |
Had previously initiated a call to the RRT | |
Yes | 106 (44.9) |
No | 128 (54.2) |
Survey Design
The resident survey contained 20 RRT‐related items and 7 demographic and practice items. Responses for RRT‐related questions utilized a 5‐point Likert scale ranging from strongly disagree to strongly agree. The survey was piloted prior to administration to check comprehension and interpretation by physicians with experience in survey writing (for the full survey, see Supporting Information, Appendix, in the online version of this article).
Survey Objectives
The survey was designed to capture the experiences of residents who had cared for a patient for whom the RRT had been activated. Data collected included residents' perceptions of the impact of the RRT on their residency education and clinical autonomy, the quality of care provided, patient safety, and hospital‐wide culture. Potential barriers to use of the RRT were also examined.
Outcomes
The study's primary outcomes included the perceived educational benefit of the RRT and its perceived impact on clinical autonomy. Secondary outcomes included the effect of years of training and resident specialty on both the perceived educational benefit and impact on clinical autonomy among our study group.
Statistical Analysis
Responses to each survey item were described for each specialty, and subgroup analyses were conducted. Years of training were dichotomized into 1 year (henceforth referred to as interns) or more than 1 year (henceforth referred to as upper‐level residents). Resident specialty was dichotomized into medical fields (internal medicine and neurology) or surgical fields. For statistical analysis, agreement responses were collapsed into disagree (strongly disagree/disagree), neutral, or agree (strongly agree/agree). The influence of years of resident training and of resident specialty was assessed for all survey items using χ2 or Fisher exact tests, as appropriate, across the 3 agreement categories. Analysis was conducted using SPSS 21.0 (IBM Corp., Armonk, NY).
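For readers who want to reproduce this style of analysis outside SPSS, the minimal sketch below illustrates the same steps in Python with pandas and SciPy: collapsing 5‐point Likert responses into the three agreement categories, cross‐tabulating them against a dichotomized grouping variable, and applying a χ2 test (falling back to the Fisher exact test only for sparse 2×2 tables, since SciPy's implementation is limited to 2×2). The column names and example data are invented for illustration; this is not the authors' code.

```python
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact

# Collapse the 5-point Likert scale into the three analysis categories.
COLLAPSE = {
    "strongly disagree": "disagree",
    "disagree": "disagree",
    "neutral": "neutral",
    "agree": "agree",
    "strongly agree": "agree",
}

def test_item(df: pd.DataFrame, item: str, group: str):
    """Cross-tabulate one survey item against a dichotomized grouping
    variable and return the contingency table and a p-value."""
    collapsed = df[item].str.lower().map(COLLAPSE)
    table = pd.crosstab(df[group], collapsed)
    chi2, p, dof, expected = chi2_contingency(table)
    # SciPy's fisher_exact handles only 2x2 tables, so it is used here
    # only when the collapsed table happens to be 2x2 and sparse.
    if table.shape == (2, 2) and (expected < 5).any():
        _, p = fisher_exact(table.to_numpy())
    return table, p

# Invented example: training level dichotomized as intern vs upper-level.
responses = pd.DataFrame({
    "training": ["intern", "upper-level", "intern", "upper-level",
                 "upper-level", "intern"],
    "rrt_valuable": ["agree", "strongly agree", "neutral",
                     "agree", "disagree", "agree"],
})
table, p = test_item(responses, "rrt_valuable", "training")
print(table)
print(f"P value = {p:.3f}")
```

Dichotomizing the grouping variables, as the authors did, keeps cell counts large enough that the χ2 approximation is usually adequate for most items.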
RESULTS
There were 246 responses to the survey out of a possible 342, yielding a response rate of 72% (Table 2). Ten respondents stated that they had never cared for a patient for whom the RRT had been activated. Given their lack of exposure to the RRT, these respondents were excluded from the analysis, yielding a final sample size of 236. The demographic and clinical practice characteristics of respondents are shown in Table 1.
The resident | Strongly Disagree/Disagree, n (%) | Neutral, n (%) | Agree/ Strongly Agree, n (%) |
---|---|---|---|
Is comfortable managing the unstable patient without the RRT | 104 (44.1) | 64 (27.1) | 66 (28.0) |
And RRT work together to make treatment decisions | 10 (4.2) | 13 (5.5) | 208 (88.1) |
Believes there are fewer opportunities to care for unstable floor patients due to the RRT | 188 (79.7) | 26 (11.0) | 17 (7.2) |
Feels less prepared to care for unstable patients due to the RRT | 201 (85.2) | 22 (9.3) | 13 (5.5) |
Feels that working with the RRT creates a valuable educational experience | 9 (3.8) | 39 (16.5) | 184 (78.0) |
Feels that nurses caring for the unstable patient should always contact them prior to contacting the RRT | 123 (52.1) | 33 (14.0) | 76 (32.2) |
Would be unhappy with nurses calling RRT prior to contacting them | 141 (59.7) | 44 (18.6) | 51 (21.6) |
Perceives that the presence of RRT decreases residents' autonomy | 179 (75.8) | 25 (10.6) | 28 (11.9) |
Demographics and Primary Outcomes
Interns comprised 83 (35%) of the respondents; the average time in postgraduate training was 2.34 years (standard deviation=1.41). Of respondents, 163 (69%) were in medical fields, and 73 (31%) were in surgical fields. Overall responses to the survey are shown in Table 2, and subgroup analysis is shown in Table 3.
The resident | 1 Year, n=83, n (%) | >1 Year, n=153, n (%) | P Value | Medical, n=163, n (%) | Surgical, n=73, n (%) | P Value |
---|---|---|---|---|---|---|
Is comfortable managing the unstable patient without the RRT | 0.01 | <0.01 | ||||
Strongly disagree/disagree | 39 (47.6) | 65 (42.8) | 67 (41.6) | 37 (50.7) | ||
Neutral | 29 (35.4) | 35 (23.0) | 56 (34.8) | 8 (11.0) | ||
Agree/strongly agree | 14 (17.1) | 52 (34.2) | 38 (23.6) | 28 (38.4) | ||
And RRT work together to make treatment decisions | 0.61 | 0.04 | ||||
Strongly disagree/disagree | 2 (2.4) | 8 (5.4) | 4 (2.5) | 6 (8.7) | ||
Neutral | 5 (6.1) | 8 (5.4) | 7 (4.3) | 6 (8.7) | ||
Agree/strongly agree | 75 (91.5) | 137 (89.3) | 151 (93.2) | 57 (82.6) | ||
Believes there are fewer opportunities to care for unstable floor patients due to the RRT | 0.05 | 0.04 | ||||
Strongly disagree/disagree | 59 (72.8) | 129 (86.0) | 136 (85.5) | 52 (72.2) | ||
Neutral | 13 (16.0) | 13 (8.7) | 15 (9.4) | 11 (15.3) | ||
Agree/strongly agree | 9 (11.1) | 8 (5.3) | 8 (5.0) | 9 (12.5) | ||
Feels less prepared to care for unstable patients due to the RRT | <0.01 | 0.79 | ||||
Strongly disagree/disagree | 62 (74.7) | 139 (90.8) | 140 (85.9) | 61 (83.6) | ||
Neutral | 14 (16.9) | 8 (5.2) | 15 (9.2) | 7 (9.6) | ||
Agree/Strongly agree | 7 (8.4) | 6 (3.9) | 8 (4.9) | 5 (6.8) | ||
Feels working with the RRT is a valuable educational experience | 0.61 | 0.01 | ||||
Strongly disagree/disagree | 2 (2.4) | 7 (4.7) | 2 (1.2) | 7 (9.9) | ||
Neutral | 12 (14.6) | 27 (18.0) | 25 (15.5) | 14 (19.7) | ||
Agree/strongly agree | 68 (82.9) | 116 (77.3) | 134 (83.2) | 50 (70.4) | ||
Feels nurses caring for unstable patients should always contact the resident prior to contacting the RRT | 0.49 | <0.01 | ||||
Strongly disagree/disagree | 47 (57.3) | 76 (50.7) | 97 (60.2) | 26 (36.6) | ||
Neutral | 9 (11.0) | 24 (16.0) | 26 (16.1) | 7 (9.9) | ||
Agree/strongly agree | 26 (31.7) | 50 (33.3) | 38 (23.6) | 38 (53.5) | ||
Would be unhappy with nurses calling RRT prior to contacting them | 0.81 | <0.01 | ||||
Strongly disagree/disagree | 51 (61.4) | 90 (58.8) | 109 (66.9) | 32 (43.8) | ||
Neutral | 16 (19.3) | 28 (18.3) | 30 (18.4) | 14 (19.2) | ||
Agree/strongly agree | 16 (19.3) | 35 (22.9) | 24 (14.7) | 27 (37.0) | ||
Perceives that the presence of the RRT decreases autonomy as a physician | 0.95 | 0.18 | ||||
Strongly disagree/disagree | 63 (77.8) | 116 (76.8) | 127 (79.9) | 52 (71.2) | ||
Neutral | 9 (11.1) | 16 (10.6) | 17 (10.7) | 8 (11.0) | ||
Agree/strongly agree | 9 (11.1) | 19 (12.6) | 15 (9.4) | 13 (17.8) |
Effect of the RRT on Resident Education
Of all residents, 66 (28%) agreed that they felt comfortable managing an unstable patient without the assistance of the RRT. Surgical residents were more likely to feel comfortable managing an unstable patient alone (38%) than were medical residents (24%) (P<0.01). Interns felt less comfortable caring for unstable patients without the RRT's assistance (17%) compared with upper‐level residents (34%) (P=0.01).
Residents overall disagreed with the statement that the RRT left them feeling less prepared to care for unstable patients (n=201; 85%). More upper‐level residents disagreed with this assertion (91%) compared with interns (75%) (P<0.01). Responses to this question did not differ significantly between medical and surgical residents.
Upper‐level residents were more likely to disagree with the statement that the RRT resulted in fewer opportunities to care for unstable patients (n=129; 86%) compared with interns (n=59; 73%) (P=0.05). Medical residents were also more likely to disagree with this statement (n=136; 86%) compared with surgical residents (n=52; 72%) (P=0.04).
With respect to residents' overall impressions of the educational value of the RRT, 68 (83%) interns and 116 (77%) upper‐level residents agreed that it provided a valuable educational experience (P=0.61). Medical and surgical residents differed in this regard, with 134 (83%) medical residents and 50 (70%) surgical residents agreeing that the RRT provided a valuable educational experience (P=0.01).
Effect of the RRT on Clinical Autonomy
Of all residents, 123 (52%) disagreed that the bedside nurse should always contact the primary resident prior to calling the RRT, whereas 76 (32%) agreed. Medicine residents were more likely to disagree with this approach (n=97; 60%) than were surgical residents (n=26; 36%) (P<0.01); indeed, a majority of surgical residents (n=38; 54%) agreed that they should be contacted first. There was no difference between interns and upper‐level residents in response to this question.
There were no differences between interns and upper‐level residents with respect to perceptions of the RRT's impact on clinical autonomy: 11% of interns and 13% of upper‐level residents agreed that the RRT decreased their clinical autonomy as a physician. There was also no significant difference between medical and surgical residents' responses to this question.
The majority of residents (n=208; 88%) agreed that they and the RRT work together to make treatment decisions for patients. This was true regardless of year of training (P=0.61), but it was expressed more often among medical residents than surgical residents (n=151, 93% vs n=57, 83%; P=0.04).
DISCUSSION
Most studies examining the educational and cultural impact of RRTs exist in the nursing literature. These studies demonstrate that medical and surgical nurses are often reluctant to call the RRT for fear of criticism by the patient's physician.[5, 8, 9, 10, 11, 12, 13] In contrast, our data demonstrate that resident physicians across all levels of training and specialties have a positive view of the RRT and its role in patient care. The data support our hypothesis that although most residents perceive educational benefit from their interactions with the RRT, this perception is greater for less‐experienced residents and for those residents who routinely provide care for critically ill patients and serve as code team leaders. In addition, a minority of residents, irrespective of years of training or medical specialty, felt that the RRT negatively impacted their clinical autonomy.
Our data have several important implications. First, although over half of the residents surveyed had not been exposed to RRTs during medical school, and despite having no formal training on the role of the RRT during residency, most residents identified their interactions with the RRT as potential learning opportunities. This finding differs from that of Benin and colleagues, who suggested that RRTs might negatively impact residents' educational development and decrease opportunities for high‐stakes clinical reasoning by allowing the clinical decision‐making process to be driven by the RRT staff rather than the resident.[5] One possible explanation for this discrepancy is the variable makeup of the RRT at different institutions. At our medical center, the RRT is composed of a critical care nurse and a respiratory therapist, whereas at other institutions the RRT may be led by a resident, fellow, attending hospitalist, or intensivist, any of whom might supersede the primary resident once the RRT is engaged.
In our study, the perceived educational benefit of the RRT was most pronounced with interns. Interns likely derive incrementally greater benefit from each encounter with an acutely decompensating patient than do senior residents, whether the RRT is present or not. Observing the actions of seasoned nurses and respiratory therapists may demonstrate new tools for interns to use in their management of such situations; for example, the RRT may suggest different modes of oxygen delivery or new diagnostic tests. The RRT also likely helps interns navigate the hospital system by assisting with decisions around escalation of care and serving as a liaison to ICU staff.
Our data also have implications for resident perceptions of clinical autonomy. Interns, who have far less experience caring for unstable patients than upper‐level residents, expressed more concern about the RRT stripping them of opportunities to do so and about feeling less prepared to handle clinically deteriorating patients. Part of this perception may stem from interns feeling less comfortable taking charge of a patient's care in the presence of an experienced critical care nurse and respiratory therapist, both because of their limited clinical experience and because of a cultural hierarchy that often places the intern at the bottom of the authority spectrum. In addition, when the RRT is called on an intern's patient, the senior resident may accompany the intern to the bedside and guide his or her approach to the situation; in some cases, the senior resident may take charge, leaving the intern feeling less autonomous.
Training sessions that address not only clinical decision making but also multidisciplinary team interactions and roles in the acute care setting may mitigate interns' concerns. Such curricula could also enhance residents' experience with interprofessional care, an aspect of clinical training that has become increasingly important in the age of limited duty hours and higher‐volume, higher‐acuity inpatient censuses. An RRT model, like a code blue model, could be used in simulation‐based training to increase both comfort with use of the RRT and the efficiency of the RRT‐resident‐nurse team. Although our study did not specifically address residents' perceptions of multidisciplinary teams, this could be a promising area for further study.
For surgical residents, additional factors are likely at play. Surgical residents spend significant time in the operating room, which reduces their time at the bedside and hinders their ability to respond swiftly when an RRT is called on their patient. This could lead surgical residents to feel less involved in the care of that patient (a possibility supported by our finding that fewer surgical residents felt able to collaborate with the RRT) and to derive less educational benefit and clinical satisfaction from the experience. Differences between medical and surgical postgraduate training also likely play a role, manifested in varying clinical roles and durations of training; as such, it may not be appropriate to draw direct comparisons between respective postgraduate year levels. In addition, differences in patients' medical complexity, varying allegiance to the traditional hierarchy of medical providers, and degree of familiarity with the RRT itself may affect surgical residents' comfort with the RRT.
Our study has several limitations: it was conducted at a single site and addressed a specific population of residents at our tertiary academic center. Although we achieved an excellent response rate, our subspecialty sample sizes were too small to allow for individual comparisons among those groups. Conducting a larger study at multiple institutions where the makeup of the RRT differs could provide further insight into how different clinical environments and RRT models shape resident perceptions. Finally, we allowed each respondent to interpret both educational benefit and clinical autonomy in the context of his or her own level of training and clinical practice rather than providing strict definitions of these terms. There is no standardized definition of autonomy in the context of resident clinical practice, and we did not measure direct educational outcomes; our study design therefore allowed only for measurement of perceptions of these concepts. Measurement of the actual educational value of the RRT (for example, through direct clinical observation or by incorporating the RRT experience into an entrustable professional activity) would provide more quantitative evidence of the RRT's utility for our resident population. Future study in this area would help support the development and ongoing assessment of RRT‐based curricula.
CONCLUSION
Our data show that resident physicians have a strongly favorable opinion of the RRT at our institution. Future studies should aim to quantify the educational benefit of RRTs for residents and identify areas for curricular development to enhance resident education as RRTs become more pervasive.
1. Institute for Healthcare Improvement. Rapid response teams. Available at: http://www.ihi.org/topics/rapidresponseteams. Accessed May 5, 2014.
2. Rapid response teams: a systematic review and meta‐analysis. Arch Intern Med. 2010;170(1):18–26.
3. Findings of the first consensus conference on medical emergency teams. Crit Care Med. 2006;34(9):2463–2478.
4. Rapid response systems: a systematic review. Crit Care Med. 2007;35(5):1238–1243.
5. Defining impact of a rapid response team: qualitative study with nurses, physicians and hospital administrators. BMJ Qual Saf. 2012;21(5):391–398.
6. How RNs rescue patients: a qualitative study of RNs' perceived involvement in rapid response teams. Qual Saf Health Care. 2010;19(5):e13.
7. Rapid response team approach to staff satisfaction. Orthop Nurs. 2008;27(5):266–271; quiz 272–273.
8. Voices from the floor: nurses' perceptions of the medical emergency team. Intensive Crit Care Nurs. 2006;22(3):138–143.
9. Rapid response teams seen through the eyes of the nurse. Am J Nurs. 2010;110(6):28–34; quiz 35–36.
10. What stops hospital clinical staff from following protocols? An analysis of the incidence and factors behind the failure of bedside clinical staff to activate the rapid response system in a multi‐campus Australian metropolitan healthcare service. BMJ Qual Saf. 2012;21(7):569–575.
11. Resident and RN perceptions of the impact of a medical emergency team on education and patient safety in an academic medical center. Crit Care Med. 2009;37(12):3091–3096.
12. Why don't hospital staff activate the rapid response system (RRS)? How frequently is it needed and can the process be improved? Implement Sci. 2011;6:39.
13. Timing and teamwork–an observational pilot study of patients referred to a rapid response team with the aim of identifying factors amenable to re‐design of a rapid response system. Resuscitation. 2012;83(6):782–787.
© 2015 Society of Hospital Medicine
Paclitaxel-Associated Melanonychia
To the Editor:
Taxane-based chemotherapy, including paclitaxel and docetaxel, is commonly used to treat solid tumor malignancies such as lung, breast, ovarian, and bladder cancers.1 Taxanes disrupt normal microtubule function by inducing tubulin polymerization and inhibiting microtubule depolymerization, leading to cell cycle arrest in the gap 2 (premitotic) and mitotic phases and blockade of cell division.2
Cutaneous side effects have been reported with taxane-based therapies, including alopecia, skin rash and erythema, and desquamation of the hands and feet (hand-foot syndrome).3 Nail changes also have been reported in 0% to 44% of treated patients,4 with one study reporting an incidence as high as 50.5%.5 Reported nail abnormalities primarily include onycholysis and, less frequently, Beau lines, subungual hemorrhagic bullae, subungual hyperkeratosis, splinter hemorrhages, acute paronychia, and pigmentary changes such as nail bed dyschromia. Among the taxanes, nail abnormalities are more commonly seen with docetaxel; few reports address paclitaxel-induced nail changes.4 Onycholysis, diffuse orange discoloration of the fingernails, Beau lines, subungual distal hyperkeratosis, and brown discoloration of 3 fingernail beds sparing the lunula have been reported with paclitaxel.6-9 We report a unique case of paclitaxel-associated melanonychia.
A 54-year-old black woman with a history of multiple myeloma and breast cancer who was being treated with paclitaxel for breast cancer presented with nail changes including nail darkening since initiating paclitaxel. She was diagnosed with multiple myeloma in 2010 and received bortezomib, dexamethasone, and an autologous stem cell transplant in August 2011. She never achieved complete remission but had been on lenalidomide with stable disease. She underwent a lumpectomy in December 2012, which revealed intraductal carcinoma with ductal carcinoma in situ that was estrogen receptor and progesterone receptor negative and ERBB2 (formerly HER2) positive. She was started on weekly paclitaxel (80 mg/m2) to complete 12 cycles and trastuzumab (6 mg/kg) every 3 weeks. While on paclitaxel, she developed grade 2 neuropathy of the hands, leading to subsequent dose reduction at week 9. She denied any other changes to her medications. On clinical examination she had diffuse and well-demarcated, brown-black, longitudinal and transverse bands beginning at the proximal nail plate and progressing distally, with onycholysis involving all 20 nails (Figure, A and B). A nail clipping of the right hallux nail was sent for analysis. Pathology results showed evidence of scattered clusters of brown melanin pigment in the nail plate. Periodic acid–Schiff staining revealed numerous yeasts at the nail base but no infiltrating hyphae. Iron stain was negative for hemosiderin. The right index finger was injected with triamcinolone acetonide to treat the onycholysis. Four months after completing the paclitaxel, she began to notice lightening of the nails and improvement of the onycholysis in all nails (Figure, C and D).
Figure. Initial appearance of diffuse, well-demarcated, brown-black, longitudinal and transverse bands beginning at the proximal nail plate and progressing distally, with onycholysis in the nails on the right hand (A) and left hand (B). Four months after completing paclitaxel, the patient noticed lightening of the nails and improvement of the onycholysis in the nails on the right hand (C) and left hand (D).
The highly proliferating cells that comprise the nail matrix epithelium mature, differentiate, and keratinize to form the nail plate and are susceptible to the antimitotic effects of systemic chemotherapy. As a result, systemic chemotherapies may lead to abnormal nail plate production and keratinization of the nail plate, causing the clinical manifestations of Beau lines, onychomadesis, and leukonychia.10
Melanonychia is the development of melanin pigmentation of the nail plate and is typically caused by matrix melanin deposition through the activation of nail matrix melanocytes. There are 3 patterns of melanonychia: longitudinal, transverse, and diffuse. A single nail plate can involve more than one pattern of melanonychia and several nails may be affected. Longitudinal melanonychia typically develops from the activation of a group of melanocytes in the nail matrix, while diffuse pigmentation arises from diffuse melanocyte activation.11 Longitudinal melanonychia is common in darker-pigmented individuals12 and can be associated with systemic diseases.10 Transverse melanonychia has been reported in association with medications including many chemotherapy agents, and each band of transverse melanonychia may correspond to a cycle of therapy.11 Drug-induced melanonychia can affect several nails and tends to resolve after completion of therapy. Melanonychia has previously been described with vincristine, doxorubicin, hydroxyurea, cyclophosphamide, 5-fluorouracil, bleomycin, dacarbazine, methotrexate, and electron beam therapy.11 Nail pigmentation changes have been reported with docetaxel; a patient developed blue discoloration on the right and left thumb lunulae that improved 3 months after discontinuation of docetaxel therapy.13 While on docetaxel, another patient developed acral erythema, onycholysis, and longitudinal melanonychia in photoexposed areas, which was thought to be secondary to possible photosensitization.14 Possible explanations for paclitaxel-induced melanonychia include a direct toxic effect on the nail bed or nail matrix, focal stimulation of nail matrix melanocytes, or photosensitization. Drug-induced melanonychia commonly appears 3 to 8 weeks after drug intake and typically resolves 6 to 8 weeks after drug discontinuation.15
Predictors of taxane-related nail changes have been studied.5 Taxane-induced nail toxicity was more prevalent in patients who were female, had a history of diabetes mellitus, had received capecitabine with docetaxel, and had a diagnosis of breast or gynecological cancer. The nail changes increased with greater number of taxane cycles administered, body mass index, and severity of treatment-related neuropathy.5 Although nail changes often are temporary and typically resolve with drug withdrawal, they may persist in some patients.16 Possible measures have been proposed to prevent taxane-induced nail toxicity including frozen gloves,17 nail cutting, and avoiding potential fingernail irritants.18
Some degree of preexisting melanonychia in our darker-skinned patient prior to starting therapy cannot be ruled out. However, according to the patient, she noticed the change only after starting paclitaxel, raising the possibility of new, worsening, or more diffuse involvement following initiation of paclitaxel therapy. Additionally, she was receiving weekly administration of paclitaxel and experienced severe neuropathy, both predictors of nail toxicity.5 No cases of lenalidomide-associated melanonychia have been reported in the literature indexed for MEDLINE. Although these nail changes are not life threatening, clinicians should be aware of these side effects, as they are cosmetically distressing to many patients and can impact quality of life.19
1. Crown J, O’Leary M. The taxanes: an update. Lancet. 2000;356:507-508.
2. Schiff PB, Fant J, Horwitz SB. Promotion of microtubule assembly in vitro by Taxol. Nature. 1979;277:665-667.
3. Heidary N, Naik H, Burgin S. Chemotherapeutic agents and the skin: an update. J Am Acad Dermatol. 2008;58:545-570.
4. Minisini AM, Tosti A, Sobrero AF, et al. Taxane-induced nail changes: incidence, clinical presentation and outcome. Ann Oncol. 2003;14:333-337.
5. Can G, Aydiner A, Cavdar I. Taxane-induced nail changes: predictors and efficacy of the use of frozen gloves and socks in the prevention of nail toxicity. Eur J Oncol Nurs. 2012;16:270-275.
6. Lüftner D, Flath B, Akrivakis C, et al. Dose-intensified weekly paclitaxel induces multiple nail disorders. Ann Oncol. 1998;9:1139-1141.
7. Hussain S, Anderson DN, Salvatti ME, et al. Onycholysis as a complication of systemic chemotherapy. report of five cases associated with prolonged weekly paclitaxel therapy and review of the literature. Cancer. 2000;88:2367-2371.
8. Almagro M, Del Pozo J, Garcia-Silva J, et al. Nail alterations secondary to paclitaxel therapy. Eur J Dermatol. 2000;10:146-147.
9. Flory SM, Solimando DA Jr, Webster GF, et al. Onycholysis associated with weekly administration of paclitaxel. Ann Pharmacother. 1999;33:584-586.
10. Hinds G, Thomas VD. Malignancy and cancer treatment-related hair and nail changes. Dermatol Clin. 2008;26:59-68.
11. Gilbar P, Hain A, Peereboom VM. Nail toxicity induced by cancer chemotherapy. J Oncol Pharm Pract. 2009;15:143-155.
12. Buka R, Friedman KA, Phelps RG, et al. Childhood longitudinal melanonychia: case reports and review of the literature. Mt Sinai J Med. 2001;68:331-335.
13. Halvorson CR, Erickson CL, Gaspari AA. A rare manifestation of nail changes with docetaxel therapy. Skinmed. 2010;8:179-180.
14. Ferreira O, Baudrier T, Mota A, et al. Docetaxel-induced acral erythema and nail changes distributed to photoexposed areas. Cutan Ocul Toxicol. 2010;29:296-299.
15. Piraccini BM, Iorizzo M. Drug reactions affecting the nail unit: diagnosis and management. Dermatol Clin. 2007;25:215-221.
16. Piraccini BM, Tosti A. Drug-induced nail disorders: incidence, management and prognosis. Drug Saf. 1999;21:187-201.
17. Scotté F, Tourani JM, Banu E, et al. Multicenter study of a frozen glove to prevent docetaxel-induced onycholysis and cutaneous toxicity of the hand. J Clin Oncol. 2005;23:4424-4429.
18. Gilbar P, Hain A, Peereboom VM. Nail toxicity induced by cancer chemotherapy. J Oncol Pharm Pract. 2009;15:143-155.
19. Hackbarth M, Haas N, Fotopoulou C, et al. Chemotherapy-induced dermatological toxicity: frequencies and impact on quality of life in women’s cancers. results of a prospective study. Support Care Cancer. 2008;16:267-273.
Verrucous Kaposi Sarcoma in an HIV-Positive Man
To the Editor:
Verrucous Kaposi sarcoma (VKS) is an uncommon variant of Kaposi sarcoma (KS) that is rarely seen in clinical practice or reported in the literature. It is strongly associated with lymphedema in patients with AIDS.1 We present a case of VKS in a human immunodeficiency virus (HIV)–positive man with cutaneous lesions that demonstrated minimal response to treatment with efavirenz-emtricitabine-tenofovir, doxorubicin, paclitaxel, and alitretinoin.
A 48-year-old man with a history of untreated HIV presented with a persistent eruption of heavily scaled, hyperpigmented, nonindurated, thin plaques in an ichthyosiform pattern on the bilateral lower legs and ankles of 4 years’ duration (Figure 1). He also had a number of soft, compressible, cystlike plaques without much overlying epidermal change on the lower extremities. He denied any prior episodes of skin breakdown, drainage, or secondary infection. Findings from the physical examination were otherwise unremarkable.
Two punch biopsies were performed on the lower legs, one from a scaly plaque and the other from a cystic area. The epidermis was hyperkeratotic and mildly hyperplastic with slitlike vascular spaces. A dense cellular proliferation of spindle-shaped cells was present in the dermis (Figure 2). Minimal cytologic atypia was noted. Immunohistochemical staining for human herpesvirus 8 (HHV-8) was strongly positive (Figure 3). Histologically, the cutaneous lesions were consistent with VKS.
At the current presentation, the CD4 count was 355 cells/mm3 and the viral load was 919,223 copies/mL. The CD4 count and viral load initially had been responsive to efavirenz-emtricitabine-tenofovir therapy; 17 months prior to the current presentation, the CD4 count was 692 cells/mm3 and the viral load was less than 50 copies/mL. However, the cutaneous lesions persisted despite therapy with efavirenz-emtricitabine-tenofovir, alitretinoin gel, and intralesional chemotherapeutic agents such as doxorubicin and paclitaxel.
Kaposi sarcoma, first described by Moritz Kaposi in 1872, represents a group of vascular neoplasms. Multiple subtypes have been described including classic, African endemic, transplant/AIDS associated, anaplastic, lymphedematous, hyperkeratotic/verrucous, keloidal, micronodular, pyogenic granulomalike, ecchymotic, and intravascular.1-3 Human herpesvirus 8 is associated with all clinical subtypes of KS.3 Immunohistochemical staining for HHV-8 latent nuclear antigen-1 has been shown in the literature to be highly sensitive and specific for KS and can potentially facilitate the diagnosis of KS among patients with similarly appearing dermatologic conditions, such as angiosarcoma, kaposiform hemangioendothelioma, or verrucous hemangioma.1,4 Human herpesvirus 8 infects endothelial cells and induces the proliferation of vascular spindle cells via the secretion of basic fibroblast growth factor and vascular endothelial growth factor.5 Human herpesvirus 8 also can lead to lymph vessel obstruction and lymph node enlargement by infecting cells within the lymphatic system. In addition, chronic lymphedema can itself lead to verruciform epidermal hyperplasia and hyperkeratosis, which has a clinical presentation similar to VKS.1
AIDS-associated KS typically starts as 1 or more purple-red macules that rapidly progress into papules, nodules, and plaques.1 These lesions have a predilection for the head, neck, trunk, and mucous membranes. Albeit a rare presentation, VKS is strongly associated with lymphedema in patients with AIDS.1,3,5 Previously, KS was often the presenting clinical manifestation of HIV infection, but since the use of highly active antiretroviral therapy (HAART) has become the standard of care, the incidence as well as the morbidity and mortality associated with KS has substantially decreased.1,5-7 Notably, in HIV patients who initially do not have signs or symptoms of KS, HHV-8 positivity is predictive of the development of KS within 2 to 4 years.6
In the literature, good prognostic indicators for KS include CD4 count greater than 150 cells/mm3, only cutaneous involvement, and negative B symptoms (eg, temperature >38°C, night sweats, unintentional weight loss >10% of normal body weight within 6 months).7 Kaposi sarcoma cannot be completely cured but can be appropriately managed with medical intervention. All KS subtypes are sensitive to radiation therapy; recalcitrant localized lesions can be treated with excision, cryotherapy, alitretinoin gel, laser ablation, or locally injected interferon or chemotherapeutic agents (eg, vincristine, vinblastine, actinomycin D).5,6 Liposomal anthracyclines (doxorubicin) and paclitaxel are first- and second-line agents for advanced KS, respectively.6
In HIV-associated KS, lesions frequently involute with the initiation of HAART; however, the cutaneous lesions in our patient persisted despite initiation of efavirenz-emtricitabine-tenofovir. He also was given intralesional doxorubicin and paclitaxel as well as topical alitretinoin but did not experience complete resolution of the cutaneous lesions. It is possible that patients with VKS are recalcitrant to typical treatment modalities and therefore may require unconventional therapies to achieve maximal clearance of cutaneous lesions.
Verrucous Kaposi sarcoma is a rare presentation of KS that is infrequently seen in clinical practice or reported in the literature.3 A PubMed search of articles indexed for MEDLINE using the search term verrucous Kaposi sarcoma yielded 13 articles, one of which included a case series of 5 patients with AIDS and hyperkeratotic KS in Germany in the 1990s.5 Four of the articles were written in French, German, or Portuguese.8-11 The remainder of the articles discussed variants of KS other than VKS.
Although most patients with HIV and KS effectively respond to HAART, it may be possible that VKS is more difficult to treat. In addition, immunohistochemical staining for HHV-8, in particular HHV-8 latent nuclear antigen-1, may be useful to diagnose KS in HIV patients with uncharacteristic or indeterminate cutaneous lesions. Further research is needed to identify and delineate various efficacious therapeutic options for recalcitrant KS, particularly VKS.
Acknowledgment
We are indebted to Antoinette F. Hood, MD, Norfolk, Virginia, who digitized our patient’s histopathology slides.
1. Grayson W, Pantanowitz L. Histological variants of cutaneous Kaposi sarcoma. Diagn Pathol. 2008;3:31.
2. Amodio E, Goedert JJ, Barozzi P, et al. Differences in Kaposi sarcoma-associated herpesvirus-specific and herpesvirus-non-specific immune responses in classic Kaposi sarcoma cases and matched controls in Sicily. Cancer Sci. 2011;102:1769-1773.
3. Fagone S, Cavaleri A, Camuto M, et al. Hyperkeratotic Kaposi sarcoma with leg lymphedema after prolonged corticosteroid therapy for SLE. case report and review of the literature. Minerva Med. 2001;92:177-202.
4. Cheuk W, Wong KO, Wong CS, et al. Immunostaining for human herpesvirus 8 latent nuclear antigen-1 helps distinguish Kaposi sarcoma from its mimickers. Am J Clin Pathol. 2004;121:335-342.
5. Hengge UR, Stocks K, Goos M. Acquired immune deficiency syndrome-related hyperkeratotic Kaposi’s sarcoma with severe lymphedema: report of 5 cases. Br J Dermatol. 2000;142:501-505.
6. James WD, Berger TG, Elston DM, eds. Andrews’ Diseases of the Skin: Clinical Dermatology. 10th ed. Philadelphia, PA: WB Saunders; 2006.
7. Thomas S, Sindhu CB, Sreekumar S, et al. AIDS-associated Kaposi’s sarcoma. J Assoc Physicians India. 2011;59:387-389.
8. Mukai MM, Chaves T, Caldas L, et al. Primary Kaposi’s sarcoma of the penis [in Portuguese]. An Bras Dermatol. 2009;84:524-526.
9. Weidauer H, Tilgen W, Adler D. Kaposi’s sarcoma of the larynx [in German]. Laryngol Rhinol Otol (Stuttg). 1986;65:389-391.
10. Basset A. Clinical aspects of Kaposi’s disease [in French]. Bull Soc Pathol Exot Filiales. 1984;77(4, pt 2):529-532.
11. Wlotzke U, Hohenleutner U, Landthaler M. Dermatoses in leg amputees [in German]. Hautarzt. 1996;47:493-501.
A disturbing conversation with another health care provider
One of my pet peeves is when a patient or colleague speaks ill of another health care provider. I find it unbecoming behavior that often (though not always) speaks more to the character of the speaker than that of the object of anger/derision/dissatisfaction. I recently had the misfortune of interacting with a nurse practitioner who behaved in this manner. (The evidence of my hypocrisy does not escape me.)
A patient had been having some vague complaints for about 5 years, including myalgias, headaches, and fatigue. She remembered a tick bite that preceded the onset of symptoms. She tested negative for Lyme disease and other tick-borne illnesses multiple times, but after seeing many different doctors she finally saw an infectious disease doctor who often treats patients for what he diagnoses as chronic Lyme infection. The patient was on antibiotics for about 5 years, but because she didn’t really feel any better, she started questioning the diagnosis.
I explained to the patient why I thought that fibromyalgia might explain her symptoms. She looked this up on the Internet and found that the disease described her symptoms completely. She was happy to stop antibiotic treatment. However, in the interest of leaving no stone unturned, I referred her to a neurologist for her headaches.
The nurse practitioner who evaluated her sent her for a brain single-photon emission computed tomography (SPECT) scan that showed “multifocal regions of decreased uptake, distribution suggestive of vasculitis or multi-infarct dementia.” The NP then informed the patient of this result, said it was consistent with CNS Lyme disease, and asked her to return to the infectious disease doctor, who then put her back on oral antibiotics.
The patient brought this all to my attention, asking for an opinion. I thought she probably had small vessel changes because she had hyperlipidemia and was a heavy smoker. But I was curious about the decision to label this as CNS Lyme, so I thought I would touch base with the NP. What ensued was possibly one of the most disturbing conversations I’ve had with another health care provider since I started practice.
She didn’t think she needed a lumbar puncture to confirm her diagnosis. She hadn’t bothered to order Lyme serologies or to look for previous results. “We take the patient’s word for it,” she smugly told me. She had full confidence that her diagnosis was correct because “we see this all the time.” When I said I thought, common things being common, that the cigarette smoking was the most likely culprit for the changes, her response was: “Common things being common, Lyme disease is pretty common around here.” On the question of why the patient was getting oral antibiotics rather than IV antibiotics per Infectious Diseases Society of America guidelines for CNS Lyme, the response I got was, again, that she sees this “all the time, and they do respond to oral antibiotics.”
I think the worst part was that when I pointed out that the preponderance of other doctors (two primary care physicians, two infectious disease doctors, another neurologist, another rheumatologist, and myself) did not agree with the diagnosis, her reply was that “the ID docs around here are way too conservative when it comes to treating chronic Lyme.”
Of course, she could very well be correct in her diagnosis. However, the conceit with which she so readily accused the ID specialists of being “too conservative” when she clearly did not do the necessary work herself (LP, serologies, etc.) just rubs me the wrong way. Lazy and arrogant make a horrible combination.
I politely disagreed and ended the conversation, but I was so worked up about the situation that I decided to write about it, thereby demonstrating the same bad behavior I claim to dislike. I am afraid that, at this stage in my professional development, magnanimity is not a quality I yet possess. Hopefully, I will not have many opportunities to demonstrate my lack of it.
Dr. Chan practices rheumatology in Pawtucket, R.I.