Time to Meds Matters for Patients with Cardiac Arrest Due to Nonshockable Rhythms
Clinical question: Is earlier administration of epinephrine in patients with cardiac arrest due to nonshockable rhythms associated with increased return of spontaneous circulation, survival, and neurologically intact survival?
Background: About 200,000 hospitalized patients in the U.S. have a cardiac arrest each year, commonly due to nonshockable rhythms. Cardiopulmonary resuscitation has been the only intervention with established efficacy; there are no well-controlled trials of epinephrine's effect on survival and neurological outcomes.
Study design: Prospective cohort from a large multicenter registry of in-hospital cardiac arrests.
Setting: Data from 570 hospitals from 2000 to 2009.
Synopsis: Authors included 25,095 adults from 570 hospitals who had cardiac arrests in hospital with asystole or pulseless electrical activity as the initial rhythm. Time to first administration of epinephrine was recorded and then separated into quartiles, and odds ratios were evaluated using one to three minutes as the reference group. Outcomes of survival to hospital discharge (10%), return of spontaneous circulation (47%), and survival to hospital discharge with favorable neurologic status (7%) were assessed.
Survival to discharge decreased as the time to administration of the first dose of epinephrine increased. Of those patients receiving epinephrine in one minute, 12% survived. This dropped to 7% for those first receiving epinephrine after seven minutes. Return of spontaneous circulation and survival to discharge with favorable neurologic status showed a similar stepwise decrease with longer times to first administration of epinephrine.
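To make the quartile comparison concrete, here is a minimal sketch of how odds ratios for survival can be computed against a one-to-three-minute reference group. The time intervals and counts below are invented for illustration; they are not the registry data reported in the study.

```python
# Illustrative only: odds ratios for survival to discharge by time-to-epinephrine
# interval, relative to a 1-3 minute reference group. All counts are hypothetical.
intervals = {
    "1-3 min": {"survived": 1200, "died": 8800},   # reference group
    "4-6 min": {"survived": 900, "died": 8100},
    "7-9 min": {"survived": 350, "died": 4650},
    ">9 min": {"survived": 100, "died": 1900},
}

def odds(counts):
    return counts["survived"] / counts["died"]

reference_odds = odds(intervals["1-3 min"])
for name, counts in intervals.items():
    print(f"{name}: OR for survival vs. 1-3 min = {odds(counts) / reference_odds:.2f}")
```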
Bottom line: Earlier administration of epinephrine to patients with cardiac arrest due to nonshockable rhythms is associated with improved survival to discharge, return of spontaneous circulation, and neurologically intact survival.
Citation: Donnino MW, Salciccioli JD, Howell MD, et al. Time to administration of epinephrine and outcome after in-hospital cardiac arrest with non-shockable rhythms: retrospective analysis of large in-hospital data registry. BMJ. 2014;348:g3028.
Frailty Indices Tool Predicts Post-Operative Complications, Mortality after Elective Surgery in Geriatric Patients
Clinical question: Is there a more accurate way to predict adverse post-operative outcomes in geriatric patients undergoing elective surgery?
Background: More than half of all operations in the U.S. involve geriatric patients. Most tools hospitalists use to predict post-operative outcomes are focused on cardiovascular events and do not account for frailty. Common in geriatric patients, frailty is thought to influence post-operative outcomes.
Study design: Prospective cohort study.
Setting: A 1,000-bed academic hospital in Seoul, South Korea.
Synopsis: A cohort of 275 elderly patients (>64 years old) who were scheduled for elective intermediate or high-risk surgery underwent a pre-operative comprehensive geriatric assessment (CGA) that included measures of frailty. This cohort was then followed for mortality, major post-operative complications (pneumonia, urinary infection, pulmonary embolism, and unplanned transfer to intensive care), length of stay, and transfer to a nursing home. Post-operative complications, transfer to a nursing facility, and one-year mortality were associated with a derived scoring tool that included the Charlson Comorbidity Index, activities of daily living (ADL), instrumental activities of daily living (IADL), dementia, risk for delirium, mid-arm circumference, and a mini-nutritional assessment.
This tool was more accurate at predicting one-year mortality than the American Society of Anesthesiologists (ASA) classification.
Bottom line: This study establishes that measures of frailty predict post-operative outcomes in geriatric patients undergoing elective surgery; however, the authors’ tool relies on the CGA, which is time-consuming, cumbersome, and uses indices unfamiliar to many hospitalists.
Citation: Kim SW, Han HS, Jung HW, et al. Multidimensional frailty scores for the prediction of postoperative mortality risk. JAMA Surg. 2014;149(7):633-640.
Pre-Operative Use of Angiotensin-Converting Enzyme Inhibitors, Angiotensin Receptor Blockers Examined in Elective Joint Replacement Surgery
Clinical question: Should angiotensin-converting enzyme inhibitors or angiotensin receptor blockers (ACEI/ARB) be held the morning of elective joint replacement?
Background: In patients taking an ACEI/ARB, the decision of whether to give these medications on the day of surgery is controversial. UpToDate recommends holding ACEI/ARB on the day of surgery; American College of Physicians guidelines and SHM Consult Medicine recommend giving these drugs on the day of surgery.
Study design: Retrospective cohort (case control) study.
Setting: A large academic hospital in Pennsylvania.
Synopsis: Researchers studied adults undergoing elective spinal fusion, total knee replacement, or total hip replacement, and compared outcomes in 323 patients who were taking an ACEI/ARB (study group) to outcomes in the 579 patients who were not taking an ACEI/ARB (control group) before surgery. It was assumed—but not studied—that the ACEI/ARB was continued the morning of surgery in all patients in the study group, because that was the standard practice at this hospital.
Compared to the control group, the study group had more post-induction hypotension (12.2% vs. 6.7%) and more post-operative acute kidney injury (5.76% vs. 3.28%). Patients who developed acute kidney injury had longer length of stay (5.76 vs. 3.28 days) but no difference in two-year mortality.
Patients in the study group had higher baseline creatinine, were older, were more likely to be taking a diuretic, and were more likely to have diabetes, heart failure, and coronary artery disease. The authors used multiple logistic regression to adjust for these differences. Anesthesia and intra-operative fluid management were not standardized or compared.
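As a rough illustration of the kind of multivariable adjustment described above, the sketch below fits a logistic regression with statsmodels. The data are simulated and the variable names are hypothetical; the study's actual model specification is not reported in this summary.

```python
# Minimal sketch of multivariable logistic-regression adjustment for baseline
# differences between exposure groups. Simulated data; hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 900
df = pd.DataFrame({
    "acei_arb": rng.integers(0, 2, n),          # pre-operative ACEI/ARB exposure (0/1)
    "age": rng.normal(68, 9, n),
    "baseline_creatinine": rng.normal(1.0, 0.3, n),
    "diuretic": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
})
# Simulated outcome: AKI risk loosely tied to exposure and covariates.
logit_p = -4 + 0.6 * df.acei_arb + 0.02 * (df.age - 68) + 1.0 * (df.baseline_creatinine - 1.0)
df["aki"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "aki ~ acei_arb + age + baseline_creatinine + diuretic + diabetes", data=df
).fit()
# The adjusted odds ratio for ACEI/ARB exposure is exp(coefficient).
print("Adjusted OR for ACEI/ARB:", float(np.exp(model.params["acei_arb"])))
```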
Bottom line: ACEI/ARB administration on the morning of elective major orthopedic surgery is likely associated with a higher risk of intra-operative hypotension and acute kidney injury.
Citation: Nielson E, Hennrikus E, Lehman E, Mets B. Angiotensin axis blockade, hypotension, and acute kidney injury in elective major orthopedic surgery. J Hosp Med. 2014;9(5):283-288.
No Mortality Difference Associated with Pre-Operative Beta Blocker Use for Coronary Artery Bypass Grafting Without Recent Myocardial Infarction
Clinical question: Is the use of beta blockers within 24 hours of coronary artery bypass grafting (CABG) surgery without recent myocardial infarction (MI) associated with decreased peri-operative mortality?
Background: Several retrospective observational studies suggest a reduction in peri-operative mortality when beta blockers are administered before CABG surgery. Although pre-operative beta blocker use for CABG is now a quality measure, it remains controversial: more recent studies suggest the observed benefit is driven mainly by patients with recent MI.
Study design: Retrospective cohort analysis.
Setting: More than 1,100 U.S. hospitals.
Synopsis: The Society of Thoracic Surgeons’ National Adult Cardiac Surgery database identified 506,110 adult patients (without MI within 21 days) undergoing nonemergent CABG surgery. Beta blocker use was defined as receiving a beta blocker within 24 hours before surgery. Although most patients (86%) received beta blockers before surgery, there was no significant difference in operative mortality, permanent stroke, prolonged ventilation, or renal failure between patients who received beta blockers and those who did not, although atrial fibrillation (Afib) was more common with pre-operative beta blocker use.
Bottom line: For patients undergoing nonemergent CABG surgery without recent MI, pre-operative beta blocker use is not associated with improved outcomes and is associated with slightly higher rates of Afib.
Citation: Brinkman W, Herbert MA, O’Brien S, et al. Preoperative beta-blocker use in coronary artery bypass grafting surgery: national database analysis. JAMA Intern Med. 2014;174(8):1320-1327.
Delirium Severity Scoring System CAM-S Correlates with Length of Stay, Mortality
Clinical question: Does the CAM-S, a modified version of the Confusion Assessment Method (CAM) that measures delirium severity, correlate with clinical outcomes?
Background: In 1990, Dr. Sharon Inouye developed the CAM, which is a common, standard measure to identify the presence of delirium. Although other scoring systems exist to quantify delirium severity, Dr. Inouye proposes an extension of the CAM (CAM-S) to measure delirium severity.
Study design: Validation analysis.
Setting: Three academic medical centers in the U.S.
Synopsis: Two validation cohorts of patients 70 years or older without dementia and at moderate to high risk of developing delirium during hospitalization were studied. The first cohort comprised 300 patients scheduled for elective, noncardiac surgery; the second was made up of 250 patients admitted to an inpatient medical service. The CAM-S uses the same items as the original CAM and rates each symptom 0 for absent, 1 for mild, or 2 for marked; acute onset or fluctuation receives 0 (absent) or 1 (present). Higher CAM-S scores appear to correlate with various outcome measures, including increased length of stay, new nursing home placement, and 90-day mortality.
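A minimal sketch of that scoring rule, as described above, follows; the item names are placeholders, and the published CAM-S instrument should be consulted for the actual items and forms.

```python
# Illustrative CAM-S-style severity score, following the rule described above:
# symptoms rated 0/1/2; acute onset or fluctuation rated 0/1. Item names are
# placeholders, not the official instrument.
def cam_s_style_score(acute_onset_or_fluctuation: int, symptom_ratings: dict) -> int:
    if acute_onset_or_fluctuation not in (0, 1):
        raise ValueError("acute onset/fluctuation is rated 0 (absent) or 1 (present)")
    if any(r not in (0, 1, 2) for r in symptom_ratings.values()):
        raise ValueError("each symptom is rated 0 (absent), 1 (mild), or 2 (marked)")
    return acute_onset_or_fluctuation + sum(symptom_ratings.values())

example = cam_s_style_score(
    acute_onset_or_fluctuation=1,
    symptom_ratings={"inattention": 2, "disorganized_thinking": 1, "altered_consciousness": 1},
)
print(example)  # 5; higher scores indicate more severe delirium
```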
Bottom line: Higher scores on the CAM-S, a scoring system based on the CAM and designed to measure delirium severity, are associated with worse in-hospital and post-discharge outcomes.
Citation: Inouye SK, Kosar CM, Tommet D, et al. The CAM-S: development and validation of a new scoring system for delirium severity in 2 cohorts. Ann Intern Med. 2014;160(8):526-533.
Thrombolytics in Pulmonary Embolism Associated with Lower Mortality, Increased Bleeding
Clinical question: What are the mortality benefits and bleeding risks associated with thrombolytic therapy, compared with other anticoagulants, in pulmonary embolism (PE)?
Background: Thrombolytics are not routinely administered for PE but can be considered in patients with hemodynamic instability due to massive PE and in those not responding to anticoagulation.
Study design: Meta-analysis.
Setting: Sixteen randomized clinical trials (RCTs) occurring in a variety of settings.
Synopsis: Trials involving 2,115 patients with PE (1,061 in the thrombolytic therapy cohort; 1,054 in the anticoagulation cohort) were studied, with special attention given to patients with intermediate-risk PE, defined by subclinical cardiovascular compromise. Thrombolytics were compared with low molecular weight heparin, unfractionated heparin, vitamin K antagonists, and fondaparinux. The primary outcomes were all-cause mortality and major bleeding. Secondary outcomes included risk of PE recurrence and intracranial hemorrhage.
Thrombolytic therapy was associated with lower all-cause mortality and a higher risk of bleeding. The rate of major bleeding was 9.24% in the thrombolytic therapy cohort and 3.42% in the anticoagulation cohort. Intracranial hemorrhage was more frequent in the thrombolytic therapy cohort (1.46% vs. 0.19%). Patients with intermediate-risk PE had a higher major bleeding rate (7.74% vs. 2.25%) and lower mortality (1.39% vs. 2.92%) with thrombolytics compared to anticoagulation. A net clinical benefit calculation (mortality benefit accounting for intracranial hemorrhage risk) demonstrated a net clinical benefit of 0.81% (95% CI, 0.65%-1.01%) for patients who received thrombolytics versus other anticoagulation.
Bottom line: This study suggested an overall mortality benefit of thrombolytics, including in patients with intermediate-risk PE.
Citation: Chatterjee S, Chakraborty A, Weinberg I, et al. Thrombolysis for pulmonary embolism and risk of all-cause mortality, major bleeding, and intracranial hemorrhage: a meta-analysis. JAMA. 2014;311(23):2414-2421.
Interventions Effective in Preventing Hospital Readmissions
Clinical question: Which interventions are most effective to prevent 30-day readmissions in medical or surgical patients?
Background: Preventing early readmissions has become a national priority. This study set out to determine which intervention had the largest impact on the prevention of early readmission.
Study design: Meta-analysis.
Setting: Forty-seven studies in multiple locations.
Synopsis: This study evaluated 47 randomized trials that assessed the effectiveness of peri-discharge interventions on the risk of all-cause or unplanned 30-day readmissions for medical and surgical patients. Outcomes included unplanned readmissions, all-cause readmissions, and a composite of unplanned and all-cause readmissions plus out-of-hospital deaths.
The included studies reported up to seven methods of preventing readmissions, including case management, home visits, patient education, and self-care support. In 42 trials reporting readmission rates, the pooled relative risk of readmission within 30 days was 0.82 (95% CI, 0.73-0.91; P<0.001).
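For readers unfamiliar with how trial-level results are combined into a single estimate such as RR 0.82, here is a minimal inverse-variance (fixed-effect) pooling sketch. The three trials below are invented for illustration, and the published review used more elaborate (random-effects) methods.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling of relative risks.
# Trial counts are hypothetical, not data from the Leppin et al. review.
import math

# (events_intervention, n_intervention, events_control, n_control)
trials = [(30, 200, 45, 200), (12, 150, 18, 150), (50, 400, 55, 400)]

weights, log_rrs = [], []
for a, n1, c, n0 in trials:
    rr = (a / n1) / (c / n0)
    var_log_rr = 1 / a - 1 / n1 + 1 / c - 1 / n0   # variance of log RR
    log_rrs.append(math.log(rr))
    weights.append(1 / var_log_rr)

pooled_log_rr = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"Pooled RR = {math.exp(pooled_log_rr):.2f} "
      f"(95% CI {math.exp(pooled_log_rr - 1.96 * se):.2f}-{math.exp(pooled_log_rr + 1.96 * se):.2f})")
```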
Multiple subgroup analyses noted that the most effective interventions on hospital readmission were those that were more complex and those that sought to augment patient capacity to access and enact dependable post-discharge care.
Limitations included reliance on single-center academic studies, the lack of a standard approach to missing data, evidence of publication bias, and differing methods used to evaluate intervention effects.
Bottom line: This study, the largest of its kind to date, suggests that the interventions analyzed, though often complex (e.g., enhancing patients' capacity for self-care at home), were efficacious in reducing 30-day readmissions.
Citation: Leppin AL, Gionfriddo MR, Kessler M, et al. Preventing 30-day hospital readmissions: a systematic review and meta-analysis of randomized trials. JAMA Intern Med. 2014;174(7):1095-1107.
New Oral Anticoagulants Increase GI Bleed Risk
Clinical question: Do thrombin and factor Xa inhibitors increase the risk of gastrointestinal (GI) bleeding when compared to vitamin K antagonists and heparins?
Background: New oral anticoagulants (thrombin and factor Xa inhibitors) are available and being used with increasing frequency because of comparable efficacy and easier administration. Some studies indicate a higher risk of GI bleeding with these agents. Further evaluation is needed, because no reversal therapy is available.
Study design: Systematic review and meta-analysis.
Setting: Data from MEDLINE, Embase, and the Cochrane Library.
Synopsis: More than 150,000 patients from 43 randomized controlled trials were evaluated for risk of GI bleeding when treated with new anticoagulants versus traditional therapy. Patients were treated for one of the following: embolism prevention in atrial fibrillation, venous thromboembolism (VTE) prophylaxis after orthopedic surgery, VTE prophylaxis in medical patients, acute VTE, and acute coronary syndrome (ACS). Use of aspirin or NSAIDs was discouraged but not documented. The odds ratio for GI bleeding with the new anticoagulants was 1.45, with a number needed to harm of 500. Subgroup evaluation revealed increased GI bleeding risk in patients treated for ACS and acute thrombosis versus prophylaxis; postsurgical patients had the lowest risk. The study was limited by the heterogeneity and differing primary outcomes (mostly efficacy rather than safety) of the included trials. The trials excluded high-risk patients, which the authors estimate to be 25%–40% of real-world patients. Further studies that include high-risk patients and focus on GI bleeding as a primary outcome are needed.
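As a back-of-the-envelope illustration of how an odds ratio relates to a number needed to harm, the sketch below converts the reported OR into an absolute risk increase under an assumed baseline bleeding risk; the baseline risk here is an assumption for illustration, not a figure from the review.

```python
# Rough illustration: converting an odds ratio to an absolute risk increase and
# number needed to harm (NNH). The baseline risk is assumed, not reported.
baseline_risk = 0.005          # assumed GI bleed risk on traditional therapy (0.5%)
odds_ratio = 1.45              # reported OR for GI bleeding with the new agents

baseline_odds = baseline_risk / (1 - baseline_risk)
treated_odds = baseline_odds * odds_ratio
treated_risk = treated_odds / (1 + treated_odds)

absolute_risk_increase = treated_risk - baseline_risk
nnh = 1 / absolute_risk_increase
print(f"Treated risk ~{treated_risk:.3%}, ARI ~{absolute_risk_increase:.3%}, NNH ~{nnh:.0f}")
```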
Bottom line: The new anticoagulants are associated with a higher incidence of GI bleeding than traditional therapy, but the risk varies by indication for therapy and needs further evaluation.
Citation: Holster IL, Valkhoff VE, Kuipers EJ, Tjwa ET. New oral anticoagulants increase risk for gastrointestinal bleeding: A systematic review and meta-analysis. Gastroenterology. 2013;145(1):105–112.
Hospitalist Reviews on Treatments for Acute Asthma, Stroke, Healthcare-Associated Pneumonia, and More
In This Edition
Literature At A Glance
A guide to this month’s studies
- ICU pressures improve transfers to the floor
- Morbidity, mortality rates high for respiratory syncytial virus infections
- Antibiotic algorithm can guide therapy in healthcare-associated pneumonia
- Three-month dual antiplatelet therapy for zotarolimus-eluting stents
- De-escalating antibiotics in sepsis
- New oral anticoagulants increase GI bleed risk
- Single vs. dual antiplatelet therapy after stroke
- Endoscopic vs. surgical cystogastrostomy for pancreatic pseudocyst drainage
- Long-term cognitive impairment after critical illness
- Holding chambers vs. nebulizers for acute asthma
ICU Pressures Improve Transfers to the Floor
Clinical question: Does ICU strain negatively affect the outcomes of patients transferred to the floor?
Background: With healthcare costs increasing and critical care staff shortages projected, ICUs will have to operate under increasing strain. This may influence decisions on discharging patients from ICUs and could affect patient outcomes.
Study design: Retrospective cohort study.
Setting: One hundred fifty-five ICUs in the United States.
Synopsis: Using the Project IMPACT database, 200,730 adult patients from 107 different hospitals were evaluated in times of ICU strain, determined by the current census, new admissions, and acuity level. Outcomes measured were initial ICU length of stay (LOS), readmission within 72 hours, in-hospital mortality rates, and post-ICU discharge LOS.
Increases of the strain variables from the fifth to the 95th percentiles resulted in a 6.3-hour reduction in ICU LOS, a 2.0-hour decrease in post-ICU discharge LOS, and a 1.0% increase in probability of ICU readmission within 72 hours. Mortality rates during the hospital stay and odds of being discharged home showed no significant change. This study was limited because the ICUs participating were not randomly chosen, outcomes of patients transferred to other hospitals were not measured, and no post-hospital data was collected, so no long-term outcomes could be measured.
Bottom line: ICU bed pressures prompt physicians to allocate ICU resources more efficiently without changing short-term patient outcomes.
Citation: Wagner J, Gabler NB, Ratcliffe SJ, Brown SE, Strom BL, Halpern SD. Outcomes among patients discharged from busy intensive care units. Ann Intern Med. 2013;159(7):447-455.
Adults Hospitalized for Respiratory Syncytial Virus Infections Have High Morbidity, Mortality Rates
Clinical question: What are the complications and outcomes of respiratory syncytial virus (RSV) infection in adults requiring hospitalization?
Background: RSV is a common cause of lower respiratory tract infection in infants and young children, leading to hospitalization and even death. RSV has been estimated to affect 3%-10% of adults annually, generally causing mild disease. However, the outcomes of adults with more severe disease are not fully known.
Study design: Retrospective cohort study.
Setting: Three acute-care public hospitals in Hong Kong.
Synopsis: All adult patients hospitalized with laboratory-confirmed RSV infection were included during the defined time period. The main outcome measure was all-cause death, with secondary outcome measures of development of acute respiratory failure requiring ventilator support and total duration of hospitalization among survivors. Additionally, the cohort of RSV patients was compared to patients admitted with seasonal influenza during this same time frame. Patients with pandemic 2009 H1N1 infection were not included.
Of patients with RSV, pneumonia was found in 42.3%, bacterial superinfection in 12.5%, and cardiovascular complications in 14.3%. Additionally, 11.1% developed respiratory failure requiring ventilator support. All-cause mortality at 30 days and 60 days was 9.1% and 11.9%, respectively, with pneumonia the most common cause of death. Use of systemic corticosteroids did not improve survival. When the RSV cohort was compared to the influenza cohort, the patients were similar in age, but the RSV patients were more likely to have underlying chronic lung disease and major systemic co-morbidities. The rate of survival and duration of hospitalization were not significantly different.
Bottom line: RSV infection is an underappreciated cause of lower respiratory tract infection in adults; severe infections requiring hospitalization have mortality rates similar to those of seasonal influenza. Further research on treatment and immunization is needed.
Citation: Lee N, Lui GC, Wong KT, et al. High morbidity and mortality in adults hospitalized for respiratory syncytial virus infections. Clin Infect Dis. 2013;57(8):1069-1077.
Antibiotic Algorithm Can Guide Therapy in Healthcare-Associated Pneumonia
Clinical question: Can an algorithm based on risk for multidrug-resistant (MDR) organisms and illness severity guide antibiotic selection in healthcare-associated pneumonia (HCAP)?
Background: The 2005 American Thoracic Society/Infectious Diseases Society of America (ATS/IDSA) guidelines identify patients with HCAP as those with recent contact with a healthcare environment, including nursing homes and hemodialysis; however, previous studies have shown that not all patients with healthcare contact have equal risk for MDR organisms.
Study design: Prospective cohort study.
Setting: Japan, multi-center.
Synopsis: Of the 445 enrolled patients, 124 were diagnosed with community-acquired pneumonia (CAP) and 321 with HCAP. Patients with HCAP were classified by severity of illness and by MDR pathogen risk factors (immune suppression, hospitalization within the last 90 days, poor functional status, and antibiotics within the past six months). Patients at low risk for MDR organisms (0-1 risk factors) were treated for CAP, while patients at high risk (≥2 risk factors), or with at least one risk factor and severe illness, were treated for HCAP.
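For clarity, the classification rule as summarized above can be expressed in a few lines of Python. This is only an illustrative sketch of the decision logic described in this summary, not the study authors' algorithm verbatim; the risk-factor names and regimen labels are placeholders.

# Illustrative sketch of the HCAP triage logic described above (not the authors' code).
MDR_RISK_FACTORS = (
    "immune_suppression",
    "hospitalization_within_90_days",
    "poor_functional_status",
    "antibiotics_within_6_months",
)

def choose_empiric_therapy(patient_risk_factors, severe_illness):
    """Return a CAP or HCAP regimen label based on MDR risk-factor count and severity."""
    count = sum(1 for factor in MDR_RISK_FACTORS if factor in patient_risk_factors)
    if count >= 2 or (count >= 1 and severe_illness):
        return "HCAP regimen (broad-spectrum)"
    return "CAP regimen"

# Example: one risk factor and non-severe illness -> treated as CAP.
print(choose_empiric_therapy({"hospitalization_within_90_days"}, severe_illness=False))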
HCAP patients had a higher 30-day mortality rate than CAP patients (13.7% vs. 5.6%, P=0.017), but mortality was lower in HCAP patients at low risk for MDR pathogens than in those at high risk (8.6% vs. 18.2%, P=0.012). Of the HCAP patients, only 7.1% received inappropriate therapy (a pathogen resistant to the initial antibiotic regimen), and the treatment failure rate was 19.3%.
Appropriateness of initial empiric therapy was not found to be associated with mortality; however, this trial might be limited by its setting, because Japan appears to have a lower prevalence of MDR pathogens than the U.S.
Bottom line: A treatment algorithm based on risk for MDR organisms and severity of illness can be used to guide empiric antibiotic therapy in patients with HCAP, and, ideally, to reduce excessive use of broad-spectrum antibiotics.
Citation: Maruyama T, Fujisawa T, Okuno M, et al. A new strategy for healthcare-associated pneumonia: a 2-year prospective multicenter cohort study using risk factors for multidrug-resistant pathogens to select initial empiric therapy. Clin Infect Dis. 2013;57(10):1373-1383.
Three-Month Dual Antiplatelet Therapy for Zotarolimus-Eluting Stents
Clinical question: Is short-term dual antiplatelet therapy noninferior to long-term therapy after placement of zotarolimus-eluting stents?
Background: Current guidelines recommend long-term (≥12 months) dual antiplatelet therapy after placement of drug-eluting stents. The optimal duration of therapy with second-generation drug-eluting stents has not been well studied; moreover, some studies of various drug-eluting stents have suggested no added benefit from long-term therapy.
Study design: Randomized controlled trial.
Setting: Brazil, multi-center.
Synopsis: Researchers randomized 3,211 patients with stable coronary artery disease (CAD) or low-risk acute coronary syndrome (ACS) undergoing intervention with zotarolimus-eluting stents to short-term (three months) or long-term (12 months) dual antiplatelet therapy. Exclusion criteria included ST-elevation myocardial infarction (STEMI), a previous drug-eluting stent, scheduled elective surgery within 12 months, and contraindication to aspirin or clopidogrel. The primary endpoint was a composite of death from any cause, MI, stroke, or major bleeding. Secondary endpoints included stent thrombosis, target lesion revascularization, adverse cardiac events, and any bleeding.
At one-year follow-up, the short-term group had primary (6.0% vs. 5.8%) and secondary (8.3% vs. 7.4%) outcome rates similar to those of the long-term group. Noninferiority of the short-term strategy also was seen in several key subgroups.
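As a rough illustration of how such a noninferiority comparison is read, the sketch below computes the risk difference for the primary endpoint and a normal-approximation confidence interval, then compares its upper bound with a noninferiority margin. The per-group sample sizes (roughly half of the 3,211 randomized patients each) and the 2.7-percentage-point margin are assumptions for illustration only, not figures taken from the trial report.

import math

# Primary-endpoint rates reported in the synopsis.
p_short, p_long = 0.060, 0.058
# Assumed group sizes (~3,211 patients split evenly) and an assumed noninferiority margin.
n_short = n_long = 3211 // 2
margin = 0.027  # hypothetical margin of 2.7 percentage points

diff = p_short - p_long
se = math.sqrt(p_short * (1 - p_short) / n_short + p_long * (1 - p_long) / n_long)
upper_95 = diff + 1.96 * se

print(f"risk difference = {diff:.3f}, upper 95% bound = {upper_95:.3f}")
print("noninferior at this margin" if upper_95 < margin else "noninferiority not shown")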
This study included patients with stable CAD or low-risk ACS and cannot be generalized to higher-risk patients. Results for zotarolimus-eluting stents cannot be generalized to other second-generation drug-eluting stents.
Bottom line: Three months of dual antiplatelet therapy after zotarolimus-eluting stent placement was noninferior to 12 months of therapy in patients with stable CAD or low-risk ACS.
Citation: Feres F, Costa RA, Abizaid A, et al. Three vs. twelve months of dual antiplatelet therapy after zotarolimus-eluting stents: the OPTIMIZE randomized trial. JAMA. 2013;310(23):2510-2522.
De-Escalating Antibiotics in Sepsis
Clinical question: Does tailoring antibiotics to the identified pathogen affect mortality in patients with severe sepsis or septic shock?
Background: In patients with sepsis, early empiric antibiotics reduce morbidity and mortality. De-escalation refers to narrowing broad-spectrum antibiotics once the pathogen and its sensitivities are known; however, no randomized controlled studies have assessed the impact of this strategy on critically ill patients.
Study design: Prospective observational study.
Setting: Academic hospital ICU in Spain.
Synopsis: From January 2008 to May 2012, 628 adult patients were treated empirically with broad-spectrum antibiotics. De-escalation was applied to 219 patients (34.9%). Outcomes measured were ICU mortality, hospital mortality, and 90-day mortality in patients who received de-escalation therapy, patients whose antibiotics were not changed, and patients for whom antibiotics were escalated.
The in-hospital mortality rate was 27.4% in patients who were de-escalated, 32.6% in the unchanged group, and 42.9% in the escalation group. ICU and 90-day mortality were lower in the de-escalation group. De-escalation was more commonly used in medical than in surgical patients.
This study is limited because it was not randomized and was conducted at a single center, so its findings might not be generalizable on a larger scale. Also, multidrug-resistant organisms were not evaluated.
Overall, narrowing empiric antibiotics in severe sepsis and septic shock appears safe when the pathogen and its sensitivities are known.
Bottom line: De-escalation of antibiotics in severe sepsis and septic shock is associated with a lower mortality.
Citation: Garnacho-Montero J, Gutierrez-Pizarraya A, Escoresca-Ortega A, et al. De-escalation of empirical therapy is associated with lower mortality in patients with severe sepsis and septic shock. Intensive Care Med. 2014;40(1):32-40.
New Oral Anticoagulants Increase GI Bleed Risk
Clinical question: Do thrombin and factor Xa inhibitors increase the risk of gastrointestinal (GI) bleeding when compared to vitamin K antagonists and heparins?
Background: New oral anticoagulants (thrombin and factor Xa inhibitors) are available and being used with increasing frequency because of comparable efficacy and easier administration. Some studies indicate a higher risk of GI bleeding with these agents. Further evaluation is needed, because no reversal therapy is available.
Study design: Systematic review and meta-analysis.
Setting: Data from MEDLINE, Embase, and the Cochrane Library.
Synopsis: More than 150,000 patients from 43 randomized controlled trials were evaluated for risk of GI bleeding when treated with the new anticoagulants versus traditional therapy. Patients were treated for one of the following indications: embolism prevention in atrial fibrillation, venous thromboembolism (VTE) prophylaxis after orthopedic surgery, VTE prophylaxis in medical patients, acute VTE, or acute coronary syndrome (ACS). Use of aspirin or NSAIDs was discouraged but not documented. The odds ratio for GI bleeding with the new anticoagulants was 1.45, with a number needed to harm of 500. Subgroup analysis revealed increased GI bleeding risk in patients treated for ACS and acute thrombosis compared with those treated for prophylaxis; post-surgical patients had the lowest risk.
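The relationship between the reported odds ratio and the number needed to harm depends on the baseline bleeding risk, which is not given in this summary. The sketch below shows the standard conversion, using a hypothetical control-group GI bleeding risk of 0.5% purely for illustration; under that assumption the arithmetic gives a value of the same order as the reported NNH of 500.

# Convert an odds ratio to an absolute risk increase and number needed to harm (NNH).
# The 0.5% baseline risk is a hypothetical value chosen for illustration.
odds_ratio = 1.45
baseline_risk = 0.005

baseline_odds = baseline_risk / (1 - baseline_risk)
treated_odds = odds_ratio * baseline_odds
treated_risk = treated_odds / (1 + treated_odds)

absolute_risk_increase = treated_risk - baseline_risk
nnh = 1 / absolute_risk_increase

print(f"treated risk = {treated_risk:.4f}, NNH = {nnh:.0f}")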
This study was limited by the heterogeneity and differing primary outcomes (mostly efficacy rather than safety) of the included trials. The trials excluded high-risk patients, whom the authors estimate to represent 25%-40% of real-world patients. Further studies that include high-risk patients and use GI bleeding as a primary outcome are needed.
Bottom line: The new oral anticoagulants tend to have a higher incidence of GI bleeding than traditional therapy, but the risk varies by treatment indication and needs further evaluation.
Citation: Holster IL, Valkhoff VE, Kuipers EJ, Tjwa ET. New oral anticoagulants increase risk for gastrointestinal bleeding: a systematic review and meta-analysis. Gastroenterology. 2013;145(1):105-112.
Single vs. Dual Antiplatelet Therapy after Stroke
Clinical question: Is dual antiplatelet therapy more beneficial or harmful than monotherapy after ischemic stroke?
Background: It is recommended that patients with ischemic stroke or transient ischemic attack (TIA) receive lifelong antiplatelet therapy; however, there have been insufficient studies evaluating the long-term safety of dual antiplatelet therapy.
Study design: Meta-analysis of randomized controlled trials (RCTs).
Setting: Data from PubMed, Embase, and the Cochrane Central Register of Controlled Trials.
Synopsis: Data from seven RCTs, including 39,574 patients with recent TIA or ischemic stroke, were reviewed. Comparisons were made regarding occurrence of intracranial hemorrhage (ICH) and recurrent stroke between patients receiving dual antiplatelet therapy and those receiving aspirin or clopidogrel monotherapy. All patients were treated for at least one year.
There was no difference in recurrent stroke or ICH between patients on dual antiplatelet therapy versus aspirin monotherapy. Patients treated with dual antiplatelet therapy did have a 46% increased risk of ICH without any additional protective benefit for recurrent stroke or TIA when compared with patients on clopidogrel monotherapy.
This information should not be applied in the acute setting, given the high risk of stroke after TIA or ischemic stroke. One major limitation of this study was that the individual trials used different combinations of dual antiplatelet therapy.
Bottom line: Long-term dual antiplatelet therapy provides no additional protection against recurrent stroke or TIA compared with aspirin or clopidogrel monotherapy, but it increases the risk of ICH relative to clopidogrel monotherapy.
Citation: Lee M, Saver JL, Hong KS, Rao NM, Wu YL, Ovbiagele B. Risk-benefit profile of long-term dual- versus single-antiplatelet therapy among patients with ischemic stroke: a systematic review and meta-analysis. Ann Intern Med. 2013;159(7):463-470.
Endoscopic vs. Surgical Cystogastrostomy for Pancreatic Pseudocyst Drainage
Clinical question: How does endoscopic cystogastrostomy for pancreatic pseudocyst drainage compare to the standard surgical approach?
Background: Pancreatic pseudocysts are a common complication of pancreatitis and necessitate decompression when they are accompanied by pain, infection, or obstruction. Decompression of the pseudocyst can be accomplished using either endoscopic or surgical cystogastrostomy.
Study design: Open-label, single-center, randomized trial.
Setting: Single-center U.S. hospital.
Synopsis: A total of 40 patients were randomized equally between the two treatment arms; 20 patients underwent endoscopic and 20 underwent surgical cystogastrostomy. No patient in the endoscopic group had a pseudocyst recurrence, compared with one patient treated surgically. Length of stay (LOS) and cost were lower for the endoscopic group than for the surgical group (two days vs. six days, P<0.001; $7,011 vs. $15,052, P=0.003).
This study is limited by several factors. First, patients with pancreatic necrosis were excluded; had these patients been included, complication rates and LOS would likely have been higher. Second, the cost difference cannot be generalized across the U.S., because Medicare payments vary by provider type and region.
Bottom line: Endoscopic cystogastrostomy for pancreatic pseudocyst drainage is as effective as standard surgical therapy and results in shorter LOS and lower costs.
Citation: Varadarajulu S, Bang JY, Sutton BS, Trevino JM, Christein JD, Wilcox CM. Equal efficacy of endoscopic and surgical cystogastrostomy for pancreatic pseudocyst drainage in a randomized trial. Gastroenterology. 2013;145(3):583-590.
Long-Term Cognitive Impairment after Critical Illness
Clinical question: Are a longer duration of delirium and higher doses of sedatives associated with long-term cognitive impairment after critical illness?
Background: Survivors of critical illness are at risk for prolonged cognitive dysfunction. Delirium (and factors associated with delirium, namely sedative and analgesic medications) has been implicated in cognitive dysfunction.
Study design: Prospective cohort study.
Setting: Multi-center, academic, and acute care hospitals.
Synopsis: The study examined 821 adults admitted to the ICU with respiratory failure, cardiogenic shock, or septic shock. Patients with pre-existing cognitive impairment, those with psychotic disorders, and those for whom follow-up would not be possible were excluded. The two risk factors measured were duration of delirium and use of sedatives/analgesics. Delirium was assessed in the ICU using the CAM-ICU algorithm, and global cognition was evaluated at three and 12 months by trained psychology professionals who were unaware of the patients’ in-hospital course.
At three months, 40% of patients had global cognition scores that were 1.5 standard deviations (SD) below population mean (similar to traumatic brain injury), and 26% had scores two SD below population mean (similar to mild Alzheimer’s). At 12 months, 34% had scores similar to traumatic brain injury patients, and 24% had scores similar to mild Alzheimer’s. A longer duration of delirium was associated with worse global cognition at three and 12 months. Use of sedatives/analgesics was not associated with cognitive impairment.
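To put the "1.5 SD below the population mean" and "2 SD below" thresholds in perspective, the short calculation below converts those cutoffs to approximate population percentiles under a normal distribution; this is a generic statistical conversion, not an analysis taken from the study.

import math

def normal_percentile(z):
    """Approximate population percentile for a z-score under a normal distribution."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

for z in (-1.5, -2.0):
    print(f"{z:+.1f} SD -> roughly {normal_percentile(z):.1f}% of the population scores lower")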
Bottom line: Critically ill ICU patients who experience a longer duration of delirium are at risk of long-term cognitive impairment persisting at 12 months.
Citation: Pandharipande PP, Girard TD, Jackson JC, et al. Long-term cognitive impairment after critical illness. N Engl J Med. 2013;369(14):1306-1316.
Holding Chambers (Spacers) vs. Nebulizers for Acute Asthma
Clinical question: Are beta-2 agonists as effective when administered through a holding chamber (spacer) as they are when administered by a nebulizer?
Background: During an acute asthma attack, beta-2 agonists must be delivered to the peripheral airways. There has been considerable controversy regarding the use of a spacer compared with a nebulizer. Aside from admission rates and length of stay, factors taken into account include cost, maintenance of nebulizer machines, and infection control (potential of cross-infection via nebulizers).
Study design: Meta-analysis of randomized controlled trials (RCTs).
Setting: Multicenter, worldwide studies from community settings and EDs.
Synopsis: In 39 studies of patients with an acute asthma attack (selected from the Cochrane Airways Group Specialized Register), hospital admission rates did not differ by delivery method in 729 adults (risk ratio 0.94, confidence interval 0.61-1.43) or in 1,897 children (risk ratio 0.71, confidence interval 0.47-1.08). Secondary outcomes included time spent in the ED and duration of hospital admission. Time in the ED varied for adults but was shorter for children treated with spacers (based on three studies). Duration of hospital admission did not differ between delivery methods.
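Both confidence intervals above cross 1.0, which is why no difference in admission rates can be claimed. The sketch below shows how a risk ratio and its 95% confidence interval are computed from a 2x2 table of admissions by delivery device, using made-up counts purely for illustration; the review's patient-level counts are not reproduced here.

import math

# Hypothetical 2x2 table: admissions vs. non-admissions by delivery device.
admitted_spacer, total_spacer = 30, 360
admitted_nebulizer, total_nebulizer = 32, 369

risk_spacer = admitted_spacer / total_spacer
risk_nebulizer = admitted_nebulizer / total_nebulizer
rr = risk_spacer / risk_nebulizer

# Katz log method for the 95% confidence interval of a risk ratio.
se_log_rr = math.sqrt(
    1 / admitted_spacer - 1 / total_spacer + 1 / admitted_nebulizer - 1 / total_nebulizer
)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
# A CI that includes 1.0 means the data are compatible with no difference in admission risk.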
Bottom line: Providing beta-2 agonists using nebulizers during an acute asthma attack is not more effective than administration using a spacer.
Citation: Cates CJ, Welsh EJ, Rowe BH. Holding chambers (spacers) versus nebulisers for beta-agonist treatment of acute asthma. Cochrane Database Syst Rev. 2013;9:CD000052.