Protocol could improve massive blood transfusion

Fresh frozen plasma

An “early and aggressive” approach to massive blood transfusion can save lives in military combat zones and may provide the same benefit in civilian trauma care, according to an article published in the AANA Journal.

The article describes 2 patients who required massive transfusions due to multiple gunshot wounds sustained while in combat zones.

One patient received an inadequate amount of blood products and ultimately died.

But the other patient benefited from a protocol change that ensured an adequate amount of blood products was delivered quickly.

David Gaskin, CRNA, of Huntsville Memorial Hospital in Texas, and his colleagues described these cases in the journal.

The authors noted that, when providing care in a combat zone, clinicians transfuse packed red blood cells (PRBCs) and fresh frozen plasma (FFP) in a 1:1 ratio. However, the packaging and thawing techniques required for plasma can delay the delivery of blood products and prevent a patient from receiving enough blood.

Another issue in a military environment is the challenge of effectively communicating with live donors on site, which can cause delays in obtaining fresh blood supplies. Both of these issues can have life-threatening consequences for patients.

This is what happened with the first patient described in the article. The 38-year-old man sustained multiple gunshot wounds to the left side of the chest, left side of the back, and flank.

The surgical team was unable to maintain the 1:1 ratio of PRBCs to plasma or to infuse an adequate quantity of fresh whole blood (FWB) in this patient. He received 26 units of PRBCs, 5 units of FFP, 3 units of FWB, and 1 unit of cryoprecipitate.

The patient experienced trauma-induced coagulopathy, acidosis, and hypothermia. He died within 2 hours of presentation.

Because of this death, the team identified and implemented a protocol to keep 4 FFP units thawed and ready for immediate use at all times. They also identified and prescreened additional blood donors and implemented a phone roster and base-wide overhead system to enable rapid notification of these donors.

The second patient described in the article benefitted from these changes. This 23-year-old male sustained a gunshot wound to the left lower aspect of the abdomen and multiple gunshot wounds to bilateral lower extremities.

The “early and aggressive” use of FWB and plasma provided the necessary endogenous clotting factors and platelets to promote hemostasis in this patient. He received 18 units of PRBCs, 18 units of FFP, 2 units of cryoprecipitate, and 24 units of FWB.
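
To put those unit counts in perspective, the plasma-to-PRBC ratios can be computed directly from the figures reported above. The short Python sketch below is purely illustrative and is not part of the authors' protocol; the patient labels are ours.

# Illustrative only: compare each patient's FFP:PRBC ratio with the 1:1
# target described in the article, using the unit counts reported above.
patients = {
    "patient 1 (pre-protocol)": {"prbc_units": 26, "ffp_units": 5},
    "patient 2 (post-protocol)": {"prbc_units": 18, "ffp_units": 18},
}

TARGET_RATIO = 1.0  # FFP:PRBC target of 1:1

for label, units in patients.items():
    ratio = units["ffp_units"] / units["prbc_units"]
    print(f"{label}: FFP:PRBC ratio = {ratio:.2f} (target = {TARGET_RATIO:.2f})")
    # patient 1 works out to about 0.19; patient 2 achieves 1.00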

Gaskin and his colleagues said these results suggest that efforts to incorporate a similar resuscitation strategy into civilian practice may improve outcomes, though the approach warrants continued study.

Drug gets orphan designation for BPDCN

Micrograph of dendritic cells

The European Medicines Agency (EMA) has granted orphan drug designation to SL-401 for the treatment of blastic plasmacytoid dendritic cell neoplasm (BPDCN).

SL-401 is a targeted therapy directed to the interleukin-3 receptor (IL-3R), which is present on cancer stem cells and tumor bulk in a range of hematologic malignancies.

The drug is composed of human IL-3 coupled to a truncated diphtheria toxin payload that inhibits protein synthesis.

SL-401 already has orphan designation from the EMA to treat acute myeloid leukemia (AML) and from the US Food and Drug Administration (FDA) for the treatment of AML and BPDCN. The drug is under development by Stemline Therapeutics, Inc.

SL-401 research

At ASH 2012 (abstract 3625), researchers reported results with SL-401 in a study of patients with AML, BPDCN, and myelodysplastic syndromes (MDS).

At that time, the study had enrolled 80 patients, including 59 with relapsed or refractory AML, 11 with de novo AML unfit for chemotherapy, 7 with high-risk MDS, and 3 with relapsed/refractory BPDCN.

Patients received a single cycle of SL-401 as a 15-minute intravenous infusion in 1 of 2 dosing regimens to determine the maximum tolerated dose (MTD) and assess antitumor activity.

With regimen A, 45 patients received doses ranging from 4 μg/kg to 12.5 μg/kg every other day for up to 6 doses. With regimen B, 35 patients received doses ranging from 7.1 μg/kg to 22.1 μg/kg daily for up to 5 doses.
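
As a rough sense of scale, the per-course totals implied by these regimens can be worked out by simple multiplication. The sketch below is illustrative arithmetic only; the 75 kg body weight is a hypothetical assumption and does not come from the study.

# Illustrative arithmetic: cumulative dose per course at the highest reported
# dose level of each regimen, for a hypothetical 75 kg patient.
WEIGHT_KG = 75  # hypothetical body weight (assumption, not from the study)

regimens = {
    "A (every other day, up to 6 doses)": {"max_dose_ug_per_kg": 12.5, "max_doses": 6},
    "B (daily, up to 5 doses)": {"max_dose_ug_per_kg": 22.1, "max_doses": 5},
}

for name, r in regimens.items():
    per_dose_ug = r["max_dose_ug_per_kg"] * WEIGHT_KG
    course_total_mg = per_dose_ug * r["max_doses"] / 1000  # convert μg to mg
    print(f"Regimen {name}: {per_dose_ug:.0f} μg per dose, "
          f"about {course_total_mg:.1f} mg over a full course")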

Of the 59 patients with relapsed/refractory AML, 2 achieved complete responses (CRs), 5 had partial responses (PRs), and 8 had minor responses (MRs). One CR lasted more than 8 months, and the other lasted more than 25 months.

Of the 11 patients with AML who were not candidates for chemotherapy, 2 had PRs and 1 had an MR. Among the 7 patients with high-risk MDS, there was 1 PR and 1 MR.

Among the 3 patients with BPDCN, there were 2 CRs. One CR lasted more than 2 months, and the other lasted more than 4 months.

The MTD was not achieved with regimen A, but the MTD for regimen B was 16.6 μg/kg/day. The dose-limiting toxicities were a gastrointestinal bleed (n=1), transaminase and creatine kinase elevations (n=1), and capillary leak syndrome (n=3). There was no evidence of treatment-related bone marrow suppression.

Last year, researchers reported additional results in BPDCN patients (Frankel et al, Blood 2014).

Eleven BPDCN patients received a single course of SL-401 (at 12.5 μg/kg intravenously over 15 minutes) daily for up to 5 doses. Three patients who had initial responses to SL-401 received a second course while in relapse.

Seven of the 9 evaluable patients (78%) responded to a single course of SL-401. There were 5 CRs and 2 PRs. The median duration of response was 5 months (range, 1-20+ months).

The most common adverse events were transient and included fever, chills, hypotension, edema, hypoalbuminemia, thrombocytopenia, and transaminasemia.

Three multicenter clinical trials of SL-401 are currently open.

Additional SL-401 studies are planned for patients with myeloma, lymphomas, and other leukemias.

About orphan designation

In the European Union, orphan designation is granted to therapies intended to treat a life-threatening or chronically debilitating condition that affects no more than 5 in 10,000 persons and for which no satisfactory treatment is available.

Companies that obtain orphan designation for a drug in the European Union benefit from a number of incentives, including protocol assistance, a type of scientific advice specific for designated orphan medicines, and 10 years of market exclusivity once the medicine is on the market. Fee reductions are also available, depending on the status of the sponsor and the type of service required.

The FDA grants orphan designation to drugs that are intended to treat diseases or conditions affecting fewer than 200,000 patients in the US.

In the US, orphan designation provides the sponsor of a drug with various development incentives, including opportunities to apply for research-related tax credits and grant funding, assistance in designing clinical trials, and 7 years of US market exclusivity if the drug is approved.
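
The numerical parts of these two criteria are simple thresholds, summarized in the sketch below. It captures only the prevalence and patient-count tests quoted above (not the other EU requirements, such as the absence of a satisfactory treatment); the function names and example figures are illustrative assumptions.

# Sketch of the numerical orphan-designation thresholds described above.
def meets_eu_prevalence_threshold(cases_per_10_000: float) -> bool:
    # EU criterion: condition affects no more than 5 in 10,000 persons
    return cases_per_10_000 <= 5.0

def meets_us_patient_threshold(affected_patients_in_us: int) -> bool:
    # US criterion: fewer than 200,000 affected patients in the US
    return affected_patients_in_us < 200_000

# Hypothetical example figures, chosen only to exercise both checks
print(meets_eu_prevalence_threshold(0.4))   # True
print(meets_us_patient_threshold(30_000))   # True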

Incorporating metacognition into morbidity and mortality rounds: The next frontier in quality improvement

A 71‐year‐old man with widely metastatic non‐small cell lung cancer presented to the emergency department of a teaching hospital at 7 pm with a chief complaint of severe chest pain relieved by sitting upright and leaning forward. A senior cardiologist, with expertise in echocardiography, assessed the patient and performed a bedside echocardiogram. He found a large pericardial effusion but concluded there was no cardiac tamponade. Given the patient's other medical problems, he referred him to internal medicine for admission to their service. The attending internist agreed to admit the patient, suggesting close cardiac monitoring and reevaluation with a formal echocardiogram in the morning. At 9 am, the team and the cardiologist were urgently summoned to the echo lab by the technician, who now diagnosed tamponade. After looking at the images, the cardiologist disagreed with the technician's interpretation and declared that there was no sign of tamponade.

After leaving the echo lab, the attending internist led a team discussion on the phenomenon of and reasons for interobserver variation. The residents initially focused on the difference in expertise between the cardiologist and technician. The attending, who felt this was unlikely because the technician was very experienced, introduced the possibility of a cognitive misstep. Having staked out an opinion on the lack of tamponade the night before and acting on that interpretation by declining admission to his service, the cardiologist was susceptible to anchoring bias, where adjustments to a preliminary diagnosis are insufficient because of the influence of the initial interpretation.[1] The following day, the cardiologist performed a pericardiocentesis and reported that the fluid came out under pressure. In the face of this definitive information, he concluded that his prior assessment was incorrect and that tamponade had been present from the start.

The origins of medical error reduction lie in the practice of using autopsies to determine the cause of death, spearheaded by Karl Rokitansky at the Vienna Medical School in the 1800s.[2] Ernest Amory Codman expanded the effort by linking treatment decisions to subsequent outcomes through follow-up of patients after hospital discharge.[3] The advent of modern imaging techniques, coupled with interventional methods of obtaining pathological specimens, has dramatically improved diagnostic accuracy over the past 40 years. As a result, the practice of using autopsies to improve clinical acumen and reduce diagnostic error has virtually disappeared, while the focus on medical error has actually increased. The forum for reducing error shifted to morbidity and mortality rounds (MMRs), which have been relabeled quality‐improvement rounds in many hospitals.

In these regularly scheduled meetings, interprofessional clinicians discuss errors and adverse outcomes. Because deaths are rarely unexpected and often occur outside of the acute care setting, the focus is usually on errors in the execution of complex clinical plans that combine the wide array of modern laboratory, imaging, pharmaceutical, interventional, surgical, and pathological tools available to clinicians today. In the era of patient safety and quality improvement, errors are mostly blamed on systems‐based issues that lead to hospital complications, despite evidence that cognitive factors play a large role.[4] Systems‐based analysis was popularized by the landmark report of the Institute of Medicine.[5] In our local institutions (the University of Toronto teaching hospitals), improving diagnostic accuracy is almost never on the agenda. We suspect the same is true elsewhere. Common themes include mistakes in medication administration and dosing, communication, and physician handover. The Swiss cheese model[6] is often invoked to diffuse blame across a number of individuals, processes, and even machines. However, as Wachter and Pronovost point out, reengineering of systems has limited capacity for solving all safety and quality improvement issues when people are involved; human error can still sabotage the effort.[7]

Discussions centered on a physician's raw thinking ability have become a third rail, even though clinical reasoning lies at the core of patient safety. Human error is rarely discussed, in part because it is mistakenly believed to be uncommon and felt to be the result of deficits in knowledge or incompetence. Furthermore, the fear of assigning blame to individuals in front of their peers may be counterproductive, discouraging identification of future errors. However, the fields of cognitive psychology and medical decision making have clearly established that cognitive errors occur predictably and often, especially at times of high cognitive load (eg, when many high-stakes, complex decisions need to be made in a short period of time). Errors do not usually result from a lack of knowledge (although they can), but rather because people rely on instincts that include common biases called heuristics.[8] Most of the time, heuristics are a helpful and necessary evolutionary adaptation of the human thought process, but by their inherent nature, they can lead to predictable and repeatable errors. Because the effects of cognitive biases are inherent to all decision makers, using this framework for discussing individual error may be a method of decreasing the second victim effect[9] and avoiding demoralizing the individual.

MMRs thus represent fertile ground for introducing cognitive psychology into medical education and quality improvement. The existing format is useful for teaching cognitive psychology because it is an open forum where discussions center on errors of omission and commission, many of which are a result of both systems issues and decision-making heuristics. Several studies have attempted to describe methods for improving MMRs[10, 11, 12]; however, none have incorporated concepts from cognitive psychology. This type of analysis has penetrated several cases in the WebM&M series created by the Agency for Healthcare Research and Quality, which can be used as a model for hospital‐based MMRs.[13] For the vignette described above, an MMR that considers systems‐based approaches might discuss how a busy emergency room, limitations of capacity on the cardiology service, and closure of the echo lab at night played a role in this story. However, although it is difficult to replay another person's mental processing, ignoring the possibility that the cardiologist in this case may have fallen prey to a common cognitive error would be a missed opportunity to learn how frequently heuristics can be faulty. A cognitive approach applied to this example would explore explanations such as anchoring, ego, and hassle biases. Front‐line clinicians in busy hospital settings will recognize the interaction between workload pressures and cognitive mistakes common to examples like this one.

Cognitive heuristics should first be introduced to MMRs by experienced clinicians, well respected for their clinical acumen, by telling specific personal stories where heuristics led to errors in their practices and why the shortcut in thinking occurred. Thereafter, the traditional MMR format can be used: presenting a case, describing how an experienced clinician might manage the case, and then asking the audience members for comment. Incorporating discussions of cognitive missteps, in medical and nonmedical contexts, would help normalize the understanding that even the most experienced and smartest people fall prey to them. The tone must be positive.

Attendees could be encouraged to review their own thought processes through diagnostic verification for cases where their initial diagnosis was incorrect. This would involve assessment for adequacy, ensuring that potential diagnoses account for all abnormal and normal clinical findings, and coherency, ensuring that the diagnoses are pathophysiologically consistent with all clinical findings. Another strategy may be to illustrate cognitive forcing strategies for particular biases.[14] For example, in the case of anchoring bias, trainees may be encouraged to replay the clinical scenario with a different priming stem and evaluate if they would come to the same clinical conclusion. A challenge for all MMRs is how best to select cases; given the difficulties in replaying one's cognitive processes, this problem may be magnified. Potential selection methods could utilize anonymous reporting systems or patient complaints; however, the optimal strategy is yet to be determined.

Graber et al. have summarized the limited research on attempts to improve cognitive processes through educational interventions and illustrate its mixed results.[15] The most positive study was a randomized controlled trial using combined pattern recognition and deliberative reasoning to improve diagnostic accuracy in the face of biasing information.[16] Despite positive results, others have suggested that cognitive biases are impossible to teach due to their subconscious nature.[17] They argue that training physicians to avoid heuristics will simply lead to overinvestigation. These polarizing views highlight the need for research to evaluate interventions like the cognitive autopsy suggested here.

Trainees recognize early that their knowledge base is limited. However, it takes more internal analysis to realize that their brains' decision‐making capacity is similarly limited. Utilizing these regularly scheduled clinical meetings in the manner described above may build improved metacognition: cognition about cognition or, more colloquially, thinking about thinking. Clinicians understand that bias can easily occur in research and accept mechanisms, such as double blinding of outcome assessments, to protect studies from those potential threats to validity. Supplementing MMRs with cognitive discussions represents an analogous intent to reduce biases, introducing metacognition as the next frontier in advancing clinical care. Errors are inevitable,[18] and recognition of our cognitive blind spots will provide physicians with an improved framework for analysis of these errors. Building metacognition is a difficult task; however, this is not a reason to stop trying. In the spirit of innovation begun by pioneers like Rokitansky and Codman, and the renewed focus on diagnostic errors generated by the recent report of the National Academy of Sciences,[19] it is time for the cognitive autopsy to be built into the quality improvement and patient safety map.

Acknowledgements

The authors thank Donald A. Redelmeier, MD, MSc, University of Toronto, and Gurpreet Dhaliwal, MD, University of California, San Francisco, for providing comments on an earlier draft of this article. Neither was compensated for their contributions.

Disclosure: Nothing to report.

References
  1. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124-1131.
  2. Nuland SB. Doctors: The Biography of Medicine. New York, NY: Vintage Books; 1995.
  3. Codman EA. The classic: a study in hospital efficiency: as demonstrated by the case report of the first five years of a private hospital. Clin Orthop Relat Res. 2013;471(6):1778-1783.
  4. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493-1499.
  5. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 1999.
  6. Reason J. The contribution of latent human failures to the breakdown of complex systems. Philos Trans R Soc Lond B Biol Sci. 1990;327(1241):475-484.
  7. Wachter RM, Pronovost PJ. Balancing “no blame” with accountability in patient safety. N Engl J Med. 2009;361(14):1401-1406.
  8. Croskerry P. From mindless to mindful practice—cognitive bias and clinical decision making. N Engl J Med. 2013;368(26):2445-2448.
  9. Wu AW. Medical error: the second victim. The doctor who makes the mistake needs help too. BMJ. 2000;320(7237):726-727.
  10. Ksouri H, Balanant PY, Tadie JM, et al. Impact of morbidity and mortality conferences on analysis of mortality and critical events in intensive care practice. Am J Crit Care. 2010;19(2):135-145.
  11. Szekendi MK, Barnard C, Creamer J, Noskin GA. Using patient safety morbidity and mortality conferences to promote transparency and a culture of safety. Jt Comm J Qual Patient Saf. 2010;36(1):3-9.
  12. Calder LA, Kwok ESH, Adam Cwinn A, et al. Enhancing the quality of morbidity and mortality rounds: the Ottawa M&M model. Acad Emerg Med. 2014;21(3):314-321.
  13. Agency for Healthcare Research and Quality. AHRQ WebM&M.
  14. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41(1):110-120.
  15. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535-557.
  16. Eva KW, Hatala RM, Leblanc VR, Brooks LR. Teaching from the clinical reasoning literature: combined reasoning strategies help novice diagnosticians overcome misleading information. Med Educ. 2007;41(12):1152-1158.
  17. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44(1):94-100.
  18. Cain DM, Detsky AS. Everyone's a little bit biased (even physicians). JAMA. 2008;299(24):2893-2895.
  19. Balogh EP, Miller BT, Ball JR. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.
Article PDF
Issue
Journal of Hospital Medicine - 11(2)
Page Number
120-122
Sections
Files
Files
Article PDF
Article PDF

A 71‐year‐old man with widely metastatic nonsmall cell lung cancer presented to an emergency department of a teaching hospital at 7 pm with a chief complaint of severe chest pain relieved by sitting upright and leaning forward. A senior cardiologist, with expertise in echocardiography, assessed the patient and performed a bedside echocardiogram. He found a large pericardial effusion but concluded there was no cardiac tamponade. Given the patient's other medical problems, he referred him to internal medicine for admission to their service. The attending internist agreed to admit the patient, suggesting close cardiac monitoring and reevaluation with a formal echocardiogram in the morning. At 9 am, the team and the cardiologist were urgently summoned to the echo lab by the technician who now diagnosed tamponade. After looking at the images, the cardiologist disagreed with the technician's interpretation and declared that there was no sign of tamponade.

After leaving the echo lab, the attending internist led a team discussion on the phenomenon of and reasons for interobserver variation. The residents initially focused on the difference in expertise between the cardiologist and technician. The attending, who felt this was unlikely because the technician was very experienced, introduced the possibility of a cognitive misstep. Having staked out an opinion on the lack of tamponade the night before and acting on that interpretation by declining admission to his service, the cardiologist was susceptible to anchoring bias, where adjustments to a preliminary diagnosis are insufficient because of the influence of the initial interpretation.[1] The following day, the cardiologist performed a pericardiocentesis and reported that the fluid came out under pressure. In the face of this definitive information, he concluded that his prior assessment was incorrect and that tamponade had been present from the start.

The origins of medical error reduction lie in the practice of using autopsies to determine the cause of death spearheaded by Karl Rokitansky at the Vienna Medical School in the 1800s.[2] Ernest Amory Codman expanded the effort through the linkage of treatment decisions to subsequent outcomes by following patients after hospital discharge.[3] The advent of modern imaging techniques coupled with interventional methods of obtaining pathological specimens has dramatically improved diagnostic accuracy over the past 40 years. As a result, the practice of using autopsies to improve clinical acumen and reduce diagnostic error has virtually disappeared, while the focus on medical error has actually increased. The forum for reducing error shifted to morbidity and mortality rounds (MMRs), which have been relabeled quality‐improvement rounds in many hospitals.

In these regularly scheduled meetings, interprofessional clinicians discuss errors and adverse outcomes. Because deaths are rarely unexpected and often occur outside of the acute care setting, the focus is usually on errors in the execution of complex clinical plans that combine the wide array of modern laboratory, imaging, pharmaceutical, interventional, surgical, and pathological tools available to clinicians today. In the era of patient safety and quality improvement, errors are mostly blamed on systems‐based issues that lead to hospital complications, despite evidence that cognitive factors play a large role.[4] Systems‐based analysis was popularized by the landmark report of the Institute of Medicine.[5] In our local institutions (the University of Toronto teaching hospitals), improving diagnostic accuracy is almost never on the agenda. We suspect the same is true elsewhere. Common themes include mistakes in medication administration and dosing, communication, and physician handover. The Swiss cheese model[6] is often invoked to diffuse blame across a number of individuals, processes, and even machines. However, as Wachter and Pronovost point out, reengineering of systems has limited capacity for solving all safety and quality improvement issues when people are involved; human error can still sabotage the effort.[7]

Discussions centered on a physician's raw thinking ability have become a third rail, even though clinical reasoning lies at the core of patient safety. Human error is rarely discussed, in part because it is mistakenly believed to be uncommon and felt to be the result of deficits in knowledge or incompetence. Furthermore, the fear of assigning blame to individuals in front of their peers may be counterproductive, discouraging identification of future errors. However, the fields of cognitive psychology and medical decision making have clearly established that cognitive errors occur predictably and often, especially at times of high cognitive load (eg, when many high stakes complex decisions need to be made in a short period of time). Errors do not usually result from a lack of knowledge (although they can), but rather because people rely on instincts that include common biases called heuristics.[8] Most of the time, heuristics are a helpful and necessary evolutionary adaptation of the human thought process, but by their inherent nature, they can lead to predictable and repeatable errors. Because the effects of cognitive biases are inherent to all decision makers, using this framework for discussing individual error may be a method of decreasing the second victim effect[9] and avoid demoralizing the individual.

MMRs thus represent fertile ground for introducing cognitive psychology into medical education and quality improvement. The existing format is useful for teaching cognitive psychology because it is an open forum where discussions center on errors of omission and commission, many of which are a result of both systems issues and decision making heuristics. Several studies have attempted to describe methods for improving MMRs[10, 11, 12]; however, none have incorporated concepts from cognitive psychology. This type of analysis has penetrated several cases in the WebM&M series created by the Agency of Healthcare Quality Research, which can be used as a model for hospital‐based MMRs.[13] For the vignette described above, a MMR that considers systems‐based approaches might discuss how a busy emergency room, limitations of capacity on the cardiology service, and closure of the echo lab at night, played a role in this story. However, although it is difficult to replay another person's mental processing, ignoring the possibility that the cardiologist in this case may have fallen prey to a common cognitive error would be a missed opportunity to learn how frequently heuristics can be faulty. A cognitive approach applied to this example would explore explanations such as anchoring, ego, and hassle biases. Front‐line clinicians in busy hospital settings will recognize the interaction between workload pressures and cognitive mistakes common to examples like this one.

Cognitive heuristics should first be introduced to MMRs by experienced clinicians, well respected for their clinical acumen, by telling specific personal stories where heuristics led to errors in their practices and why the shortcut in thinking occurred. Thereafter, the traditional MMR format can be used: presenting a case, describing how an experienced clinician might manage the case, and then asking the audience members for comment. Incorporating discussions of cognitive missteps, in medical and nonmedical contexts, would help normalize the understanding that even the most experienced and smartest people fall prey to them. The tone must be positive.

Attendees could be encouraged to review their own thought processes through diagnostic verification for cases where their initial diagnosis was incorrect. This would involve assessment for adequacy, ensuring that potential diagnoses account for all abnormal and normal clinical findings, and coherency, ensuring that the diagnoses are pathophysiologically consistent with all clinical findings. Another strategy may be to illustrate cognitive forcing strategies for particular biases.[14] For example, in the case of anchoring bias, trainees may be encouraged to replay the clinical scenario with a different priming stem and evaluate if they would come to the same clinical conclusion. A challenge for all MMRs is how best to select cases; given the difficulties in replaying one's cognitive processes, this problem may be magnified. Potential selection methods could utilize anonymous reporting systems or patient complaints; however, the optimal strategy is yet to be determined.

Graber et al. have summarized the limited research on attempts to improve cognitive processes through educational interventions and illustrate its mixed results.[15] The most positive study was a randomized control trial using combined pattern recognition and deliberative reasoning to improve diagnostic accuracy in the face of biasing information.[16] Despite positive results, others have suggested that cognitive biases are impossible to teach due to their subconscious nature.[17] They argue that training physicians to avoid heuristics will simply lead to over investigation. These polarizing views highlight the need for research to evaluate interventions like the cognitive autopsy suggested here.

Trainees recognize early that their knowledge base is limited. However, it takes more internal analysis to realize that their brains' decision‐making capacity is similarly limited. Utilizing these regularly scheduled clinical meetings in the manner described above may build improved metacognition, cognition about cognition or more colloquially thinking about thinking. Clinicians understand that bias can easily occur in research and accept mechanisms to protect studies from those potential threats to validity such as double blinding of outcome assessments. Supplementing MMRs with cognitive discussions represents an analogous intent to reduce biases, introducing metacognition as the next frontier in advancing clinical care. Errors are inevitable,[18] and recognition of our cognitive blind spots will provide physicians with an improved framework for analysis of these errors. Building metacognition is a difficult task; however, this is not a reason to stop trying. In the spirit of innovation begun by pioneers like Rokitansky and Codman, and renewed focus on diagnostic errors generated by the recent report of the National Academy of Sciences[19], it is time for the cognitive autopsy to be built into the quality improvement and patient safety map.

Acknowledgements

The authors thank Donald A. Redelemeier, MD, MSc, University of Toronto, and Gurpreet Dhaliwal, MD, University of California, San Francisco, for providing comments on an earlier draft of this article. Neither was compensated for their contributions.

Disclosure: Nothing to report.

A 71‐year‐old man with widely metastatic nonsmall cell lung cancer presented to an emergency department of a teaching hospital at 7 pm with a chief complaint of severe chest pain relieved by sitting upright and leaning forward. A senior cardiologist, with expertise in echocardiography, assessed the patient and performed a bedside echocardiogram. He found a large pericardial effusion but concluded there was no cardiac tamponade. Given the patient's other medical problems, he referred him to internal medicine for admission to their service. The attending internist agreed to admit the patient, suggesting close cardiac monitoring and reevaluation with a formal echocardiogram in the morning. At 9 am, the team and the cardiologist were urgently summoned to the echo lab by the technician who now diagnosed tamponade. After looking at the images, the cardiologist disagreed with the technician's interpretation and declared that there was no sign of tamponade.

After leaving the echo lab, the attending internist led a team discussion on the phenomenon of and reasons for interobserver variation. The residents initially focused on the difference in expertise between the cardiologist and technician. The attending, who felt this was unlikely because the technician was very experienced, introduced the possibility of a cognitive misstep. Having staked out an opinion on the lack of tamponade the night before and acting on that interpretation by declining admission to his service, the cardiologist was susceptible to anchoring bias, where adjustments to a preliminary diagnosis are insufficient because of the influence of the initial interpretation.[1] The following day, the cardiologist performed a pericardiocentesis and reported that the fluid came out under pressure. In the face of this definitive information, he concluded that his prior assessment was incorrect and that tamponade had been present from the start.

The origins of medical error reduction lie in the practice of using autopsies to determine the cause of death spearheaded by Karl Rokitansky at the Vienna Medical School in the 1800s.[2] Ernest Amory Codman expanded the effort through the linkage of treatment decisions to subsequent outcomes by following patients after hospital discharge.[3] The advent of modern imaging techniques coupled with interventional methods of obtaining pathological specimens has dramatically improved diagnostic accuracy over the past 40 years. As a result, the practice of using autopsies to improve clinical acumen and reduce diagnostic error has virtually disappeared, while the focus on medical error has actually increased. The forum for reducing error shifted to morbidity and mortality rounds (MMRs), which have been relabeled quality‐improvement rounds in many hospitals.

In these regularly scheduled meetings, interprofessional clinicians discuss errors and adverse outcomes. Because deaths are rarely unexpected and often occur outside of the acute care setting, the focus is usually on errors in the execution of complex clinical plans that combine the wide array of modern laboratory, imaging, pharmaceutical, interventional, surgical, and pathological tools available to clinicians today. In the era of patient safety and quality improvement, errors are mostly blamed on systems‐based issues that lead to hospital complications, despite evidence that cognitive factors play a large role.[4] Systems‐based analysis was popularized by the landmark report of the Institute of Medicine.[5] In our local institutions (the University of Toronto teaching hospitals), improving diagnostic accuracy is almost never on the agenda. We suspect the same is true elsewhere. Common themes include mistakes in medication administration and dosing, communication, and physician handover. The Swiss cheese model[6] is often invoked to diffuse blame across a number of individuals, processes, and even machines. However, as Wachter and Pronovost point out, reengineering of systems has limited capacity for solving all safety and quality improvement issues when people are involved; human error can still sabotage the effort.[7]

Discussions centered on a physician's raw thinking ability have become a third rail, even though clinical reasoning lies at the core of patient safety. Human error is rarely discussed, in part because it is mistakenly believed to be uncommon and felt to be the result of deficits in knowledge or incompetence. Furthermore, the fear of assigning blame to individuals in front of their peers may be counterproductive, discouraging identification of future errors. However, the fields of cognitive psychology and medical decision making have clearly established that cognitive errors occur predictably and often, especially at times of high cognitive load (eg, when many high stakes complex decisions need to be made in a short period of time). Errors do not usually result from a lack of knowledge (although they can), but rather because people rely on instincts that include common biases called heuristics.[8] Most of the time, heuristics are a helpful and necessary evolutionary adaptation of the human thought process, but by their inherent nature, they can lead to predictable and repeatable errors. Because the effects of cognitive biases are inherent to all decision makers, using this framework for discussing individual error may be a method of decreasing the second victim effect[9] and avoid demoralizing the individual.

MMRs thus represent fertile ground for introducing cognitive psychology into medical education and quality improvement. The existing format is useful for teaching cognitive psychology because it is an open forum where discussions center on errors of omission and commission, many of which are a result of both systems issues and decision making heuristics. Several studies have attempted to describe methods for improving MMRs[10, 11, 12]; however, none have incorporated concepts from cognitive psychology. This type of analysis has penetrated several cases in the WebM&M series created by the Agency of Healthcare Quality Research, which can be used as a model for hospital‐based MMRs.[13] For the vignette described above, a MMR that considers systems‐based approaches might discuss how a busy emergency room, limitations of capacity on the cardiology service, and closure of the echo lab at night, played a role in this story. However, although it is difficult to replay another person's mental processing, ignoring the possibility that the cardiologist in this case may have fallen prey to a common cognitive error would be a missed opportunity to learn how frequently heuristics can be faulty. A cognitive approach applied to this example would explore explanations such as anchoring, ego, and hassle biases. Front‐line clinicians in busy hospital settings will recognize the interaction between workload pressures and cognitive mistakes common to examples like this one.

Cognitive heuristics should first be introduced to MMRs by experienced clinicians, well respected for their clinical acumen, by telling specific personal stories where heuristics led to errors in their practices and why the shortcut in thinking occurred. Thereafter, the traditional MMR format can be used: presenting a case, describing how an experienced clinician might manage the case, and then asking the audience members for comment. Incorporating discussions of cognitive missteps, in medical and nonmedical contexts, would help normalize the understanding that even the most experienced and smartest people fall prey to them. The tone must be positive.

Attendees could be encouraged to review their own thought processes through diagnostic verification for cases where their initial diagnosis was incorrect. This would involve assessment for adequacy, ensuring that potential diagnoses account for all abnormal and normal clinical findings, and coherency, ensuring that the diagnoses are pathophysiologically consistent with all clinical findings. Another strategy may be to illustrate cognitive forcing strategies for particular biases.[14] For example, in the case of anchoring bias, trainees may be encouraged to replay the clinical scenario with a different priming stem and evaluate if they would come to the same clinical conclusion. A challenge for all MMRs is how best to select cases; given the difficulties in replaying one's cognitive processes, this problem may be magnified. Potential selection methods could utilize anonymous reporting systems or patient complaints; however, the optimal strategy is yet to be determined.

Graber et al. have summarized the limited research on attempts to improve cognitive processes through educational interventions and illustrate its mixed results.[15] The most positive study was a randomized control trial using combined pattern recognition and deliberative reasoning to improve diagnostic accuracy in the face of biasing information.[16] Despite positive results, others have suggested that cognitive biases are impossible to teach due to their subconscious nature.[17] They argue that training physicians to avoid heuristics will simply lead to over investigation. These polarizing views highlight the need for research to evaluate interventions like the cognitive autopsy suggested here.

Trainees recognize early that their knowledge base is limited. However, it takes more internal analysis to realize that their brains' decision‐making capacity is similarly limited. Utilizing these regularly scheduled clinical meetings in the manner described above may build improved metacognition, cognition about cognition or more colloquially thinking about thinking. Clinicians understand that bias can easily occur in research and accept mechanisms to protect studies from those potential threats to validity such as double blinding of outcome assessments. Supplementing MMRs with cognitive discussions represents an analogous intent to reduce biases, introducing metacognition as the next frontier in advancing clinical care. Errors are inevitable,[18] and recognition of our cognitive blind spots will provide physicians with an improved framework for analysis of these errors. Building metacognition is a difficult task; however, this is not a reason to stop trying. In the spirit of innovation begun by pioneers like Rokitansky and Codman, and renewed focus on diagnostic errors generated by the recent report of the National Academy of Sciences[19], it is time for the cognitive autopsy to be built into the quality improvement and patient safety map.

Acknowledgements

The authors thank Donald A. Redelmeier, MD, MSc, University of Toronto, and Gurpreet Dhaliwal, MD, University of California, San Francisco, for providing comments on an earlier draft of this article. Neither was compensated for their contributions.

Disclosure: Nothing to report.

References
  1. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124-1131.
  2. Nuland SB. Doctors: The Biography of Medicine. New York, NY: Vintage Books; 1995.
  3. Codman EA. The classic: a study in hospital efficiency: as demonstrated by the case report of the first five years of a private hospital. Clin Orthop Relat Res. 2013;471(6):1778-1783.
  4. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493-1499.
  5. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 1999.
  6. Reason J. The contribution of latent human failures to the breakdown of complex systems. Philos Trans R Soc Lond B Biol Sci. 1990;327(1241):475-484.
  7. Wachter RM, Pronovost PJ. Balancing “no blame” with accountability in patient safety. N Engl J Med. 2009;361(14):1401-1406.
  8. Croskerry P. From mindless to mindful practice—cognitive bias and clinical decision making. N Engl J Med. 2013;368(26):2445-2448.
  9. Wu AW. Medical error: the second victim. The doctor who makes the mistake needs help too. BMJ. 2000;320(7237):726-727.
  10. Ksouri H, Balanant PY, Tadie JM, et al. Impact of morbidity and mortality conferences on analysis of mortality and critical events in intensive care practice. Am J Crit Care. 2010;19(2):135-145.
  11. Szekendi MK, Barnard C, Creamer J, Noskin GA. Using patient safety morbidity and mortality conferences to promote transparency and a culture of safety. Jt Comm J Qual Patient Saf. 2010;36(1):3-9.
  12. Calder LA, Kwok ESH, Cwinn AA, et al. Enhancing the quality of morbidity and mortality rounds: the Ottawa M&M model. Acad Emerg Med. 2014;21(3):314-321.
  13. Agency for Healthcare Research and Quality. AHRQ WebM&M: morbidity and mortality rounds on the web.
  14. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41(1):110-120.
  15. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535-557.
  16. Eva KW, Hatala RM, Leblanc VR, Brooks LR. Teaching from the clinical reasoning literature: combined reasoning strategies help novice diagnosticians overcome misleading information. Med Educ. 2007;41(12):1152-1158.
  17. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44(1):94-100.
  18. Cain DM, Detsky AS. Everyone's a little bit biased (even physicians). JAMA. 2008;299(24):2893-2895.
  19. Balogh EP, Miller BT, Ball JR. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.

Issue
Journal of Hospital Medicine - 11(2)
Page Number
120-122
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence: Dr. Allan Detsky, MD, Mount Sinai Hospital, Room 429, 600 University Ave., Toronto, Ontario M5G 1X5, Canada; Telephone: 416‐586‐8507; Fax: 416‐586‐8350; E‐mail: [email protected]

Chemotherapy Does Not Improve Quality of Life with End-Stage Cancer

Article Type
Changed
Thu, 12/15/2022 - 16:07
Display Headline
Chemotherapy Does Not Improve Quality of Life with End-Stage Cancer

Clinical question: Does palliative chemotherapy improve quality of life (QOL) in patients with end-stage cancer, regardless of performance status?

Background: There is continued debate about the benefit of palliative chemotherapy at the end of life. Guidelines recommend a good performance score as an indicator of appropriate use of therapy; however, little is known about the benefits and harms of chemotherapy in metastatic cancer patients stratified by performance status.

Study design: Longitudinal, prospective cohort study.

Setting: Multi-institutional in the United States.

Synopsis: Five U.S. institutions enrolled 661 patients with metastatic cancer and an estimated life expectancy of less than six months; the 312 patients who died during the study period were included in the final analysis, which was based on postmortem caregiver questionnaires about QOL during the patients’ last week of life. Contrary to current thought, the study demonstrated that patients with good ECOG performance status (0-1) who received end-of-life palliative chemotherapy had significantly worse QOL than those who did not. There was no difference in QOL among patients with worse performance status (ECOG 2-3).

This study is one of the first prospective investigations of this topic and makes a compelling case for withholding palliative chemotherapy at the end of life regardless of performance status. The study is somewhat limited in that the QOL measurement is only for the last week of life and the patients were not randomized into the chemotherapy arm, which could bias results.

Bottom line: Palliative chemotherapy does not improve QOL near death, and may actually worsen QOL in patients with good performance status.

Citation: Prigerson HG, Bao Y, Shah MA, et al. Chemotherapy use, performance status, and quality of life at the end of life. JAMA Oncol. 2015;1(6):778-784.

Issue
The Hospitalist - 2015(11)

Sliding-Scale Insulin Does Not Improve Blood Glucose Control in Hospitalized Patients

Article Type
Changed
Fri, 09/14/2018 - 12:07
Display Headline
Sliding-Scale Insulin Does Not Improve Blood Glucose Control in Hospitalized Patients

Clinical question: Does the use of sliding-scale insulin improve blood glucose control in hospitalized patients?

Bottom line: Sliding-scale insulin is commonly used to manage hyperglycemia in hospitalized patients. The evidence suggests that this regimen does not result in better blood glucose control. (LOE = 1a-)

Reference: Lee Y, Lin Y, Leu W, et al. Sliding-scale insulin used for blood glucose control: a meta-analysis of randomized controlled trials. Metabolism 2015;64:1183-1192.

Study design: Meta-analysis (randomized controlled trials)

Funding source: Government

Allocation: Uncertain

Setting: Inpatient (any location)

Synopsis: These investigators searched multiple databases including PubMed, EMBASE, and the Cochrane Library to find randomized controlled trials that evaluated the efficacy of sliding-scale insulin to manage hyperglycemia in hospitalized patients. Two authors independently evaluated the studies for inclusion, extracted the data, and performed quality assessments.

Eight of the 11 included studies compared regular insulin sliding scale (RISS) regimens with non–sliding-scale regimens. All RISS regimens consisted of subcutaneous regular insulin injections according to patients' blood glucose levels. Non–sliding-scale regimens consisted of basal-bolus or basal insulin regimens, continuous intravenous insulin infusions, and closed-loop artificial pancreas systems. Target blood glucose levels for individual studies varied greatly and included a range of 100 mg/dL to 150 mg/dL, a goal of less than 140 mg/dL, and a goal of less than 180 mg/dL. Hypoglycemia was generally defined as a glucose level of less than 70 mg/dL, though three of the studies had an even lower cut-off.

In the two studies that evaluated hyperglycemia, one defined it as a glucose level greater than 180 mg/dL while the other defined it as greater than 240 mg/dL. A meta-analysis of relevant data showed no significant difference in the percentage of patients who achieved an average blood glucose level in the target range when comparing RISS with non–sliding-scale regimens. The trend, however, favored the non–sliding-scale group and the difference became significant (relative risk 1.48, 95% CI 1.09-2.02) after one study with a very wide confidence interval was removed. Furthermore, the incidence of hyperglycemia and the mean blood glucose levels were significantly higher in the RISS group.
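
As a rough guide for readers less familiar with these effect measures, the short Python sketch below shows how a relative risk and its approximate 95% confidence interval are computed from a simple two-by-two table. The counts are invented for illustration only; they are assumptions, not the patient-level data behind this meta-analysis.

import math

# Hypothetical counts (illustration only, not the meta-analysis data):
# patients reaching the glucose target on each regimen.
target_non_riss, total_non_riss = 74, 100
target_riss, total_riss = 50, 100

# Relative risk = risk in one group divided by risk in the other.
rr = (target_non_riss / total_non_riss) / (target_riss / total_riss)

# Large-sample 95% confidence interval computed on the log scale.
se_log_rr = math.sqrt(1/target_non_riss - 1/total_non_riss + 1/target_riss - 1/total_riss)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")  # RR = 1.48 with these assumed counts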

Although overall hypoglycemic episodes occurred more frequently in the non–sliding-scale group, there was no significant difference detected in the incidence of severe or symptomatic hypoglycemia. Length of hospital stay was also similar in both groups. Finally, one study compared the use of routine diabetes medications plus RISS with routine diabetes medications alone and found no difference in the number of hypoglycemic or hyperglycemic events.

Significant heterogeneity was detected in the results of this meta-analysis and can be attributed to the differing patient populations, insulin regimens, and working definitions in the individual studies as noted above.

Dr. Kulkarni is an assistant professor of hospital medicine at Northwestern University in Chicago.

Issue
The Hospitalist - 2015(11)

Subclavian Central Lines Have Fewer Infections, Clots; Increased Risk of Pneumothorax

Article Type
Changed
Fri, 09/14/2018 - 12:07
Display Headline
Subclavian Central Lines Have Fewer Infections, Clots; Increased Risk of Pneumothorax

Clinical question: Which insertion site for central venous catheterization results in fewer complications?

Bottom line: Central venous catheterization via a subclavian insertion site, as compared with femoral and jugular sites, decreases the risk of bloodstream infections and symptomatic deep vein thromboses (DVTs), but results in more pneumothoraces. This risk could potentially be mitigated with the use of ultrasound guidance during catheter insertion. (LOE = 1b)

Reference: Parienti JJ, Mongardon N, Mégarbane B, et al. Intravascular complications of central venous catheterization by insertion site. N Engl J Med 2015;373(13):1220-1229.

Study design: Randomized controlled trial (nonblinded)

Funding source: Government

Allocation: Concealed

Setting: Inpatient (ICU only)

Synopsis

These investigators randomized 3027 patients in the intensive care unit who required nontunneled central venous access to receive 3471 intravenous catheters at one of three insertion sites: subclavian, jugular, or femoral. The catheters were placed by residents or staff physicians who had prior experience in the procedure. All patients had peripheral blood cultures and catheter tip cultures sent at the time of catheter removal. Patients also underwent compression ultrasonography at the insertion site within two days of catheter removal to assess for DVT. The three groups were well-balanced at baseline and the median duration of catheter use was five days. Analysis was by intention to treat.

The primary composite endpoint of catheter-related bloodstream infections and symptomatic DVTs occurred less frequently in the subclavian group than in the other two groups (1.5 events per 1000 catheter-days in the subclavian group, 3.6 in the jugular group, and 4.6 in the femoral group). The risk of this outcome was greater in both the femoral and jugular groups when compared directly with the subclavian group (femoral vs subclavian: hazard ratio [HR] = 3.5; 95% CI 1.5-7.8; P = .003; jugular vs subclavian: HR = 2.1; 95% CI 1.0-4.3; P = .04). The subclavian group, however, did have the highest risk of mechanical complications, mainly pneumothoraces.
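
The event rates above are expressed per 1000 catheter-days, a denominator that accounts for how long each catheter stayed in place rather than simply how many patients were catheterized. A minimal sketch of that calculation, using assumed counts rather than the trial's actual event data, is:

# Illustration only: computing an event rate per 1000 catheter-days.
# The numbers below are assumptions, not figures from the trial.
events = 8               # e.g., catheter-related bloodstream infections plus symptomatic DVTs
catheter_days = 5300     # total days that catheters in the group were in place

rate_per_1000 = events / catheter_days * 1000
print(f"{rate_per_1000:.1f} events per 1000 catheter-days")  # about 1.5 with these assumed numbers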

When all three adverse outcomes (infections, DVTs, and mechanical complications) are pooled, the differences between the three groups are less compelling (percentage of catheters with overall complications: 3.1% subclavian, 3.7% jugular, 3.4% femoral).

Dr. Kulkarni is an assistant professor of hospital medicine at Northwestern University in Chicago.

Issue
The Hospitalist - 2015(11)

Depression and Psoriasis

Article Type
Changed
Thu, 12/15/2022 - 15:00
Display Headline
Depression and Psoriasis

While psoriasis is a known risk factor for depression, depression can also exacerbate or trigger psoriasis. This relationship between depression and psoriasis, however, remains to be fully explored.

In an article published online on September 30 in JAMA Dermatology, Cohen et al examined the association between psoriasis and major depression in the US population. The authors conducted a population-based study of individuals who participated in the National Health and Nutrition Examination Survey from 2009 through 2012.

The authors identified 351 (2.8%) cases of psoriasis and 968 (7.8%) cases of major depression among the 12,382 participants included in the study. Of the patients with psoriasis, 58 (16.5%) met criteria for major depression. The mean (standard deviation) Patient Health Questionnaire-9 score was significantly higher among patients with a history of psoriasis than among those without psoriasis (4.54 [5.7] vs 3.22 [4.3], P<.001). After adjustment for sex, age, race, body mass index, physical activity level, smoking history, alcohol use, history of myocardial infarction, history of stroke, and history of diabetes mellitus, psoriasis remained significantly associated with major depression (odds ratio, 2.09 [95% confidence interval, 1.41–3.11], P<.001). Having a history of cardiovascular events did not modify the risk of major depression for patients with psoriasis. The investigators also found that the risk of major depression was not significantly different between patients with limited and those with extensive psoriasis (odds ratio, 0.66 [95% confidence interval, 0.18–2.44], P=.53).

What’s the Issue?

We know that psoriasis is associated with depression. This study, however, has some surprising findings. The severity of psoriasis was unrelated to the risk of major depression. Additionally, cardiovascular events did not seem to impact major depression in participants with psoriasis. Therefore, all patients with psoriasis, regardless of severity, may be at risk for major depression. Will these findings impact your evaluation of psychological issues in individuals with psoriasis?

We want to know your views! Tell us what you think.

Author and Disclosure Information

Dr. Weinberg is from the Icahn School of Medicine at Mount Sinai, New York, New York.

Dr. Weinberg reports no conflicts of interest in relation to this post.


Adjuvant imatinib for 3 years better than 1 year in high-risk GIST

Article Type
Changed
Wed, 05/26/2021 - 13:56
Display Headline
Adjuvant imatinib for 3 years better than 1 year in high-risk GIST

After surgery for high-risk gastrointestinal stromal tumor (GIST), patients who received adjuvant imatinib for 3 years achieved longer relapse-free survival (RFS) and overall survival (OS) compared with those treated for 1 year, according to a study published online in Journal of Clinical Oncology.

With a median follow-up of 7.5 years, the 5-year survival rates of greater than 90% represent the highest reported to date in high-risk GIST.

“We speculate that, other than adjuvant imatinib, two procedures were crucially important for achieving the high overall survival rates: longitudinal monitoring of the abdomen with CT to detect GIST recurrence early when the tumor bulk was still small and restarting of imatinib after recurrence was detected,” wrote Dr. Heikki Joensuu of the Comprehensive Cancer Center Helsinki and University of Helsinki, Finland, and colleagues.

Five-year RFS rates for 3- and 1-year treatment durations were 71.1% and 52.3%, respectively (hazard ratio, 0.60; 95% CI, 0.44-0.81; P less than .001); 5-year OS rates were 91.9% and 85.3%, respectively (HR, 0.60; 95% CI, 0.37-0.97; P = .036), the investigators reported (J Clin Oncol. 2015 Nov. 2. doi: 10.1200/JCO.2015.62.9170).

After a median follow-up of 90 months, the second planned analysis of the open-label Scandinavian Sarcoma Group XVIII/AIO study evaluated outcomes of 358 patients, 181 in the 12-month group and 177 in the 36-month group. Earlier results from the SSGXVIII/AIO trial (after a 4.5-year follow-up) showed significantly longer survival in patients who received imatinib for 3 years versus 1 year, and these results have informed treatment guidelines.

However, two other large randomized trials evaluated adjuvant imatinib for durations less than 3 years in patients with lower-risk GIST, and neither study found a survival benefit. The investigators point out that because low- or intermediate-risk GIST is cured with surgery alone in the great majority of patients, most do not benefit from adjuvant imatinib.

“Hypothetically, these results suggest that obtaining overall survival benefit may require durable administrations of imatinib and that the patients at high risk for recurrence are the optimal target population,” they wrote.

All but two patients reported at least one adverse event, but most events were mild. Previous reports have suggested cardiac toxicity with imatinib, but only one patient had cardiac failure, perhaps because of the relatively low adjuvant dosage of 400 mg daily.

Dr. Joensuu reported consulting or advisory roles with Blueprint Medicines, ARIAD Pharmaceuticals, and Orion Pharma. Several of his coauthors reported ties to industry sources.


Vitals

Key clinical point: Patients with high-risk GIST treated with adjuvant imatinib for 3 years had longer relapse-free survival (RFS) and overall survival (OS) than did those treated for 1 year.

Major finding: Five-year RFS rates for 3- and 1-year treatment durations were 71.1% and 52.3%, respectively (hazard ratio, 0.60; 95% CI, 0.44-0.81; P less than .001); 5-year OS rates were 91.9% and 85.3%, respectively (HR, 0.60; 95% CI, 0.37-0.97; P = .036).

Data source: After a median follow up of 90 months, a second planned analysis of the open-label SSGXVIII/AIO study evaluated outcomes of 358 patients, 181 in the 12-month group and 177 in the 36-month group.

Disclosures: Dr. Joensuu reported consulting or advisory roles with Blueprint Medicines, ARIAD Pharmaceuticals, and Orion Pharma. Several of his coauthors reported ties to industry sources.

Adjuvant lapatinib added no benefit against head and neck squamous cell carcinoma

Article Type
Changed
Fri, 01/04/2019 - 13:06
Display Headline
Adjuvant lapatinib added no benefit against head and neck squamous cell carcinoma

Lapatinib in combination with platinum-based chemoradiotherapy and as long-term maintenance therapy showed no benefit in patients with surgically treated high-risk squamous cell carcinoma of the head and neck (SCCHN).

No difference was observed between treatment arms in the primary endpoint of disease-free survival (DFS) or the secondary endpoints of investigator-assessed DFS and overall survival.

“Certainly, these findings should serve as a note of caution on the risks of initiating large phase III studies (of this and other drugs) with insufficient evidence of single-agent activity,” wrote Dr. Kevin Harrington of the division of radiotherapy and imaging at the Institute of Cancer Research and Royal Marsden Hospital, London, and his colleagues (J Clin Oncol. 2015 Nov. 2. doi: 10.1200/JCO.2015.61.4370).

©KGH/Wikimedia Commons/Creative Commons ASA 3.0
This histopathologic image shows well-differentiated squamous cell carcinoma in an excisional biopsy specimen.

The placebo-controlled phase III trial from 84 sites in 21 countries randomized 688 patients with resected SCCHN to receive placebo or lapatinib. The study was halted early because of the apparent plateauing of DFS events at the median follow-up time of 35.3 months, the investigators reported.

DFS events (disease recurrence or death) occurred in 32% of patients who received placebo versus 35% of patients who received lapatinib (HR, 1.10; 95% CI, 0.85-1.43; P = .45). No significant differences in DFS were observed between treatment arms by human papillomavirus or EGFR status.

Lapatinib is a small-molecule inhibitor of epidermal growth factor receptor (EGFR) and human epidermal growth factor receptor 2 (HER2) and was postulated to be active in squamous cell carcinoma of the head and neck because many of these tumors overexpress EGFR. Lapatinib has shown efficacy in HER2-positive metastatic breast cancer, but not in other EGFR-driven cancers.

Compliance was high in both placebo and lapatinib arms, with 83% and 76%, respectively, achieving greater than 80% compliance. Adverse events of grade 3 or higher were observed in 67% of the placebo arm and 75% of the lapatinib arm. The most common grade 3 or 4 adverse events were lymphopenia and mucosal inflammation.


Vitals

Key clinical point: After surgery for high-risk squamous cell carcinoma of the head and neck, the addition of lapatinib to chemoradiotherapy and as long-term maintenance therapy offered no benefit.

Major finding: Disease-free survival events occurred in 32% of patients who received placebo versus 35% of patients who received lapatinib (HR, 1.10; 95% CI, 0.85-1.43; P = .45).

Data source: A phase III trial from 84 sites in 21 countries randomizing 688 patients with resected SCCHN to receive placebo or lapatinib.

Disclosures: Dr. Harrington reported financial ties to Merck Sharp & Dohme, Oncos Therapeutics, Cellgene, Viralytics, Lytix, Oncolytics Biotech, Genelux, and AstraZeneca. Several of his coauthors reported ties to industry sources.

Severe acne patients stay on antibiotics too long

Article Type
Changed
Fri, 01/18/2019 - 15:22
Display Headline
Severe acne patients stay on antibiotics too long

Patients with severe acne often remained on antibiotics longer than recommended before beginning treatment with isotretinoin, according to a retrospective chart review of patients treated for acne at New York University.

The medical records analysis of 137 patients with severe acne who eventually received isotretinoin found that the average duration of antibiotic use in these patients was 331.3 days, far exceeding expert recommendations to limit use to 3 months, reported Dr. Arielle R. Nagler and her associates in the department of dermatology, New York University.

In total, 15.3% of patients in the study were treated with antibiotics for 3 months or less, and 64.2% were treated with antibiotics for 6 months or longer. Almost 34% were treated with antibiotics for 1 year or longer.

The mean time elapsed between the first recorded mention of possible isotretinoin use and the actual initiation of treatment was 155.8 days, Dr. Nagler and her colleagues reported. The mean age at initiation of isotretinoin was 19.6 years.

“Prolonged courses of systemic antibiotics are discouraged for several reasons including increasing resistance of P. acnes [Propionibacterium acnes] to antibiotics,” the authors wrote. “Courses 6 months or longer are highly likely to induce resistance.”

Patients who received antibiotic treatment only at the study site took antibiotics for a mean duration of 283.1 days, whereas those treated at multiple institutions had a mean duration of 380.2 days, they added (P = .054).

“Dermatologists should be aware that patients presenting to them who have been cared for by other providers are at particular risk for extended courses of antibiotics,” they said.

To help reduce unnecessary antibiotic use, providers should recognize patients who are not improving after 6-8 weeks, and consider starting isotretinoin therapy “at an earlier time point, especially in those with severe acne,” Dr. Nagler and her coauthors advised.

Read the full report in the Journal of the American Academy of Dermatology.
