Predictors for Surgical Management of Small Bowel Obstruction
Clinical question: Are there clinical or computed tomography (CT) findings that identify which patients will need early surgical management in adhesive small bowel obstruction (ASBO)?
Background: Previous studies have documented adverse outcomes from delayed surgery in patients with ASBO, including increased length of stay (LOS), complications, and mortality. Most patients respond to nonoperative management, however.
Study design: Prospective observational study.
Setting: Three academic and tertiary referral medical centers.
Synopsis: In a multivariate analysis of 202 patients admitted with presumed ASBO and no immediate surgical need, 52 of whom eventually required surgical intervention, three predictors of needing operative care emerged: no flatus (odds ratio [OR], 3.28; 95% confidence interval [CI], 1.51-7.12; P=0.003); high-grade obstruction on CT, defined as only minimal passage of air and fluid into the distal small bowel or colon (OR, 2.44; 95% CI, 1.10-5.43; P=0.028); and free fluid on CT (OR, 2.59; 95% CI, 1.13-5.90; P=0.023).
Despite these associations, clinicians should not view these findings as indications for surgery. Of the patients who responded to nonoperative management, one-third had no flatus, and on CT one-third had high-grade obstruction and half had free fluid. Instead, because patients with these findings are at an increased risk of failing nonoperative management, they should be observed more closely and reassessed more frequently.
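For readers who want to see the arithmetic behind these effect estimates, the short Python sketch below computes an odds ratio and a Wald 95% confidence interval from a 2x2 table. The cell counts are hypothetical, since the paper's raw counts are not reproduced here; only the method is standard.

import math

# Hypothetical 2x2 table for one predictor (e.g., no flatus).
# Counts are illustrative only, NOT taken from the study.
a, b = 30, 50    # predictor present: surgery / no surgery
c, d = 22, 100   # predictor absent:  surgery / no surgery

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR {odds_ratio:.2f} (95% CI {low:.2f}-{high:.2f})")

Note that the study's ORs come from a multivariate model, which adjusts each estimate for the other predictors; the unadjusted calculation above only shows where such numbers come from.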
Bottom line: Patients with no flatus, or with free fluid or high-grade obstruction on CT, are at increased risk of requiring surgical management for ASBO.
Citation: Kulvatunyou N, Pandit V, Moutamn S, et al. A multi-institution prospective observational study of small bowel obstruction: clinical and computerized tomography predictors of which patients may require early surgery. J Trauma Acute Care Surg. 2015;79(3):393-398.
Adding Advanced Molecular Techniques to Standard Blood Cultures May Improve Patient Outcomes
Clinical question: Does the addition of rapid multiplex polymerase chain reaction (PCR) molecular techniques to standard blood culture bottle (BCB) processing, with or without antimicrobial stewardship recommendations, affect antimicrobial utilization and patient outcomes?
Background: Standard BCB processing typically requires two days to provide identification and susceptibility testing results. PCR-based molecular testing can be run on positive BCBs to deliver organism identification and susceptibility results more rapidly, typically within one hour. Earlier results could improve antimicrobial utilization, limit antimicrobial resistance, decrease the risk of Clostridium difficile colitis, improve patient outcomes, and decrease healthcare costs. The impact of these techniques on outcomes is uncertain.
Study design: Prospective, randomized controlled trial (RCT).
Setting: Single large tertiary academic medical center.
Synopsis: Nearly 750 patients were randomized to conventional BCB processing (control), BCB with rapid multiplex PCR and templated recommendations (rmPCR), or BCB with rapid multiplex PCR and real-time antimicrobial stewardship provided by an infectious disease physician or specially trained pharmacist (rmPCR/AS). Time to microorganism identification was reduced from 22.3 hours in the control arm to 1.3 hours in the intervention arms. Both intervention groups had decreased use of broad-spectrum piperacillin-tazobactam, increased use of narrow-spectrum β-lactams, and decreased treatment of contaminants. Time to appropriate empiric treatment modification was shortest in the rmPCR/AS group.
Groups did not differ in mortality, length of stay, or cost, although an adequately powered study might show benefits on these outcomes.
Bottom line: The addition of rapid multiplex PCR, ideally combined with antimicrobial stewardship, improves antimicrobial utilization in patients with positive blood cultures.
Citation: Banerjee R, Teng CB, Cunningham SA, et al. Randomized trial of rapid multiplex polymerase chain reaction-based blood culture identification and susceptibility testing. Clin Infect Dis. 2015;61(7):1071-1080.
Some Readmission Risk Factors Not Captured by Medicare
Clinical question: Are there patient characteristics not currently measured by the Medicare readmission program that account for differences in hospital readmission rates?
Background: The Medicare Hospital Readmissions Reduction Program (HRRP) financially penalizes hospitals with higher-than-expected 30-day readmission rates. In 2014, more than 2,000 U.S. hospitals were fined a combined $480 million for high readmission rates. HRRP accounts for differences in patient age, gender, discharge diagnosis, and diagnoses identified in Medicare claims over the previous 12 months; however, the impact of other factors is uncertain.
Study design: Survey data from the Health and Retirement Study, with linked Medicare claims.
Setting: Community-dwelling U.S. adults, older than 50 years.
Synopsis: Investigators analyzed more than 33,000 admissions from 2000 to 2012. They found 22 patient characteristics not included in the HRRP calculation that were statistically significant predictors of hospital-wide 30-day readmission and were more likely to be present among patients cared for in hospitals in the highest quintile of readmission rates. Adjusting for these characteristics reduced the difference in readmission rates between the highest- and lowest-performing quintiles by 48%. Examples include ethnicity, education level, personal and household income, prescription drug coverage, Medicaid enrollment, and cognitive status, among many others.
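To make the adjustment idea concrete, here is a toy Python sketch using direct standardization over a single made-up patient characteristic. The study itself used multivariable models with all 22 characteristics, so this is only a simplified illustration; every number below is invented.

# (readmission rate, share of patients) per risk stratum for two quintiles;
# all values are invented for illustration.
top_quintile    = {"low_risk": (0.12, 0.30), "high_risk": (0.26, 0.70)}
bottom_quintile = {"low_risk": (0.10, 0.70), "high_risk": (0.22, 0.30)}
standard_mix    = {"low_risk": 0.5, "high_risk": 0.5}  # common reference mix

def crude_rate(hospital):
    return sum(rate * share for rate, share in hospital.values())

def standardized_rate(hospital):
    return sum(hospital[s][0] * standard_mix[s] for s in standard_mix)

print("crude gap:       ", round(crude_rate(top_quintile) - crude_rate(bottom_quintile), 3))
print("standardized gap:", round(standardized_rate(top_quintile) - standardized_rate(bottom_quintile), 3))

Because the worse-performing hospital treats a higher-risk patient mix, the gap shrinks (here from 8.2 to 3.0 percentage points) once both hospitals are evaluated against the same reference mix, which is the same phenomenon the study quantifies.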
Bottom line: Patient characteristics account for much of the difference in readmission rates between high- and low-performing hospitals, suggesting that HRRP penalties reflect who hospitals treat as much as how well they treat them.
Citation: Barnett ML, Hsu J, McWilliams JM. Patient characteristics and differences in hospital readmission rates. JAMA Intern Med. 2015;175(11):1803-1812.
Unassigned, Undocumented Patients Take a Toll on Healthcare and Hospitalists
When a patient must remain in the acute care hospital despite being well enough to transition to a lower level of care, costs continue to mount as the patient receives care at the most expensive level.
“But policymakers must understand that reducing support for essential hospitals might save dollars in the short term but ultimately threatens access to care and creates greater costs in the long run,” says Beth Feldpush, DrPH, senior vice president of policy and advocacy for America’s Essential Hospitals in Washington, D.C. The group represents more than 250 essential hospitals, which fill a safety net role and provide communitywide services, such as trauma, neonatal intensive care, and disaster response.
“Our hospitals, which already operate at a loss on average, cannot continue to sustain federal and state funding cuts,” Dr. Feldpush says. “Access to care for vulnerable patients and entire communities will suffer if we continue to chip away at crucial sources of support, such as Medicaid and Medicare disproportionate share hospital funding and payment for outpatient services.”
The Affordable Care Act (ACA) makes many changes to the healthcare system that are designed to improve the quality of, value of, and access to healthcare services.
“While many are good in theory, they have faced challenges in practice,” Dr. Feldpush says.
For example, the law’s authors included deep cuts to Medicaid and Medicare disproportionate share hospital (DSH) payments, which support hospitals that provide a large volume of uncompensated care. They made these cuts with the assumption that Medicaid expansion and the ACA health insurance marketplace would significantly increase coverage, lessening the need for DSH payments. The U.S. Supreme Court’s decision to give states the option of expanding Medicaid has resulted in expansion in only about half of the states, however.
“But the DSH cuts remain, meaning our hospitals are getting significantly less support for the same or more uncompensated care,” Dr. Feldpush says.
Likewise, the ACA put into place many quality incentive programs for Medicare, including those designed to reduce preventable readmissions and hospital-acquired conditions and to encourage more value-based purchasing.
“The goals are obviously good ones, but the quality measures used to calculate incentive payments or penalties fail to account for the sociodemographic challenges our patients face—and that our hospitals can’t control,” she says. “So, these programs disproportionately penalize our hospitals, which, in turn, creates a vicious circle that reduces the funding they need to make improvements.”
Access to equitable healthcare for low-income, uninsured, and other vulnerable patients is a national problem, Dr. Feldpush continues. But the severity of the problem can vary by community and region—in states that have chosen not to expand their Medicaid programs, for example, or in economically depressed areas. TH
Heparin Bridging for Atrial Fibrillation
Clinical question: In patients with atrial fibrillation (AF) or flutter, is heparin bridging needed during interruption of warfarin therapy for surgery or invasive procedures?
Background: Bridging is intended to decrease the risk of stroke or other arterial thromboembolism by minimizing time off anticoagulation. Bridging may increase the risk of serious bleeding, offsetting any benefit. Guidelines have provided weak and inconsistent recommendations due to a lack of randomized trials.
Study design: Randomized, double blind, placebo-controlled trial.
Setting: More than 100 centers in the U.S. and Canada, from 2009-2014.
Synopsis: Investigators randomized 1,884 patients on warfarin with a CHADS2 score of one or higher who were undergoing an elective surgery or procedure to dalteparin or placebo, given from three days until 24 hours before the procedure and for five to 10 days after. Mean CHADS2 score was 2.3; 3% of patients had scores of five to six. Approximately one-third of patients were on aspirin, and most procedures (89%) were classified as minor. Patients with mechanical heart valves; stroke, transient ischemic attack (TIA), or systemic embolization within 12 weeks; major bleeding within six weeks; renal insufficiency; thrombocytopenia; or planned cardiac, intracranial, or intraspinal surgery were excluded.
Thirty-day incidence of arterial thromboembolism (stroke, TIA, systemic embolization) was 0.4% without bridging and 0.3% with bridging (P=0.01 for noninferiority). Patients suffering arterial thromboembolism had a mean CHADS2 score of 2.6; most events occurred after minor procedures. Major bleeding was less common without bridging (1.3% vs. 3.2%; relative risk, 0.41; P=0.005 for superiority).
In this trial, most patients underwent minor procedures, and few patients with CHADS2 scores of five to six were enrolled; nevertheless, this well-designed randomized trial adds important evidence to existing observational data.
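The CHADS2 score used for risk stratification throughout this trial is mechanical to compute; a minimal Python sketch using the standard published weights:

def chads2(chf: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """CHADS2: 1 point each for CHF, hypertension, age >= 75, and
    diabetes; 2 points for prior stroke/TIA. Range 0-6."""
    return (chf + hypertension + (age >= 75) + diabetes
            + 2 * prior_stroke_or_tia)

# Example near the trial's mean score of 2.3:
print(chads2(chf=False, hypertension=True, age=78,
             diabetes=False, prior_stroke_or_tia=False))  # -> 2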
Bottom line: Bridging is not warranted for most AF patients with CHADS2 scores of four or lower, at least for low-risk procedures.
Citation: Douketis JD, Spyropoulos AC, Kaatz S, et al. Perioperative bridging anticoagulation in patients with atrial fibrillation. N Engl J Med. 2015;373(9):823-833.
Modified Valsalva Better than Standard Maneuver to Restore Sinus Rhythm
Clinical question: Does a postural modification to the Valsalva maneuver improve its effectiveness?
Background: The Valsalva maneuver, often used to treat supraventricular tachycardia, is rarely successful. A modification to the maneuver that increases relaxation-phase venous return and vagal stimulation could improve its efficacy.
Study design: Multicenter, randomized controlled trial (RCT).
Setting: Ten emergency departments in England.
Synopsis: Four hundred thirty-three patients with stable supraventricular tachycardia (excluding atrial fibrillation or flutter) were randomized to use the Valsalva maneuver (control) or modified Valsalva maneuver (intervention). In the control group, strain was standardized using a manometer (40 mm Hg for 15 seconds). In the intervention group, patients underwent the same maneuver, followed by lying supine with passive leg raise to 45 degrees for 15 seconds. Participants could repeat the maneuver if it was initially unsuccessful. Randomization was stratified by center.
Using an intention-to-treat analysis, 43% of the intervention group achieved the primary outcome of sinus rhythm one minute after straining, compared with 17% of the control group (P<0.0001). The intervention group was less likely to receive adenosine (50% vs. 69%, P=0.0002) or any emergency antiarrhythmic treatment (57% vs. 80%, P<0.0001).
No significant differences were seen in hospital admissions, length of ED stay, or adverse events between groups.
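As a rough check on the headline comparison, the Python sketch below runs a two-proportion z-test on 43% vs. 17%. The per-arm sizes are assumed to be roughly half of the 433 enrolled, so treat this as an approximation rather than the paper's exact analysis.

import math

n1, p1 = 216, 0.43   # modified Valsalva (assumed arm size)
n2, p2 = 217, 0.17   # standard Valsalva (assumed arm size)

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)           # pooled proportion
z = (p1 - p2) / math.sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))
print(f"z = {z:.1f}")  # ~5.9, consistent with P < 0.0001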
Bottom line: In patients with stable supraventricular tachycardia, the modified Valsalva maneuver is significantly more effective than the standard maneuver at restoring sinus rhythm.
Citation: Appelboam A, Reuben A, Mann C, et al. Postural modification to the standard Valsalva manoeuvre for emergency treatment of supraventricular tachycardias (REVERT): a randomised controlled trial [published online ahead of print August 24, 2015]. Lancet. doi: 10.1016/S0140-6736(15)61485-4.
CHA2DS2-VASc Score Modestly Predicts Stroke, Thromboembolism, Death
Clinical question: For patients with heart failure (HF), with and without concurrent atrial fibrillation (AF), does the CHA2DS2-VASc score predict ischemic stroke, thromboembolism, and death?
Background: Factors in the CHA2DS2-VASc score predict increased risk of stroke, thromboembolism, and death, regardless of whether AF is present. It is unknown if this score can identify subgroups of patients with HF, with and without AF, at particularly high or low risk of these events.
Study design: Prospective, cohort study.
Setting: Three Danish registries, 2000-2012.
Synopsis: Among 42,987 patients 50 years and older with incident HF who were not on anticoagulation, the absolute risk of stroke in patients without AF was 1.5% or more per year when the CHA2DS2-VASc score was two or higher, and 4% or more at five years. Risks were higher in the 21.9% of patients with AF. The CHA2DS2-VASc score modestly predicted endpoints and had an approximately 90% negative predictive value for stroke, thromboembolism, and death at one-year follow-up, whether or not AF was present.
This large study examined HF patients in a relatively homogeneous Danish population, and some patients may have had undiagnosed AF. Functional status and ejection fraction could not be categorized; however, the reported five-year results may be generalizable to patients with chronic HF. Select patients with HF without AF who have two or more score factors besides HF might benefit from anticoagulation.
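For reference, the CHA2DS2-VASc score itself is simple to compute; a minimal Python sketch using the standard published weights (note that every patient in this cohort scores at least one point for HF by definition):

def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 prior_stroke_tia_te: bool, vascular_disease: bool,
                 female: bool) -> int:
    """CHA2DS2-VASc: 1 point each for CHF, hypertension, diabetes,
    vascular disease, and female sex; age 65-74 scores 1 and age >= 75
    scores 2; prior stroke/TIA/thromboembolism scores 2. Range 0-9."""
    age_points = 2 if age >= 75 else 1 if age >= 65 else 0
    return (chf + hypertension + diabetes + vascular_disease + female
            + age_points + 2 * prior_stroke_tia_te)

# Example: a 70-year-old woman with HF and hypertension scores 4.
print(cha2ds2_vasc(chf=True, hypertension=True, age=70, diabetes=False,
                   prior_stroke_tia_te=False, vascular_disease=False,
                   female=True))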
Bottom line: The CHA2DS2-VASc score modestly predicts stroke, thromboembolism, and death among patients with HF, but further studies are needed to determine its clinical usefulness.
Citation: Melgaard L, Gorst-Rasmussen A, Lane DA, Rasmussen LH, Larsen TB, Lip GY. Assessment of the CHA2DS2-VASc Score in predicting ischemic stroke, thromboembolism, and death in patients with heart failure with and without atrial fibrillation. JAMA. 2015;314(10):1030-1038.
Intraoperative Hypotension Predicts Postoperative Mortality
Clinical question: What blood pressure deviations during surgery are predictive of mortality?
Background: Despite the widely assumed importance of blood pressure (BP) management on postoperative outcomes, there are no accepted thresholds requiring intervention.
Study design: Retrospective cohort.
Setting: Six Veterans Affairs hospitals, 2001-2008.
Synopsis: Intraoperative BP data from 18,756 patients undergoing major noncardiac surgery were linked with procedure data, patient-related risk factors, and 30-day mortality data from the VA Surgical Quality Improvement Program database. Overall 30-day mortality was 1.8%. Using three different definitions of hypertension and hypotension (standard deviations from this population's mean, absolute thresholds suggested by the medical literature, or changes from baseline BP), no measure of hypertension predicted mortality. After adjusting for 10 preoperative patient-related risk factors, however, extremely low BP for five minutes or more (whether defined as systolic BP <70 mm Hg, mean arterial pressure <49 mm Hg, or diastolic BP <30 mm Hg) was associated with 30-day mortality, with statistically significant odds ratios ranging from 2.4 to 3.2.
Because this is an observational study, no causal relationship can be established from these data. Low BPs could be markers for sicker patients with increased mortality, despite researchers’ efforts to adjust for known preoperative risks.
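To illustrate the kind of screening rule the study's thresholds imply, the Python sketch below flags any run of five or more consecutive minutes with mean arterial pressure (MAP) below 49 mm Hg in a once-per-minute record. The sampling interval and data layout are assumptions, not the study's actual pipeline.

from typing import List

def sustained_hypotension(map_by_minute: List[float],
                          threshold: float = 49.0,
                          min_minutes: int = 5) -> bool:
    """Return True if MAP stays below threshold for at least
    min_minutes consecutive one-minute samples."""
    run = 0
    for value in map_by_minute:
        run = run + 1 if value < threshold else 0
        if run >= min_minutes:
            return True
    return False

readings = [72, 65, 48, 47, 46, 45, 44, 60]   # MAP, mm Hg, one per minute
print(sustained_hypotension(readings))        # True: five minutes below 49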
Bottom line: Intraoperative hypotension lasting five minutes or more, but not intraoperative hypertension, predicts 30-day mortality.
Citation: Monk TG, Bronsert MR, Henderson WG, et al. Association between intraoperative hypotension and hypertension and 30-day postoperative mortality in noncardiac surgery. Anesthesiology. 2015;123(2):307-319.
Caprini Risk Assessment Tool Can Distinguish High Risk of VTE in Critically Ill Surgical Patients
Clinical question: Is the Caprini Risk Assessment Model valid for predicting venous thromboembolism (VTE) risk in critically ill surgical patients?
Background: Critically ill surgical patients are at increased risk of developing VTE. Chemoprophylaxis decreases VTE risk, but benefits must be balanced against bleeding risk. Rapid and accurate risk stratification supports decisions about prophylaxis; however, data regarding appropriate risk stratification are limited.
Study design: Retrospective, cohort study.
Setting: Surgical ICU (SICU) at a single U.S. academic medical center, 2007-2013.
Synopsis: Among 4,844 consecutive admissions, the in-hospital VTE rate was 7.5% (364). Using a previously validated, computer-generated, retrospective risk score based on the 2005 Caprini model, patients were most commonly at moderate risk for VTE upon ICU admission (32%). Fifteen percent (723) were extremely high risk. VTE incidence increased linearly with increasing Caprini scores. Data were abstracted from multiple electronic sources.
Younger age, recent sepsis or pneumonia, central venous access on ICU admission, personal VTE history, and operative procedure were significantly associated with inpatient VTE events. The proportion of patients who received chemoprophylaxis postoperatively was similar regardless of VTE risk. Patients at higher risk were more likely to receive chemoprophylaxis preoperatively.
Results from this retrospective, single-center study suggest that Caprini is a valid tool to predict inpatient VTE risk in this population. Inclusion of multiple risk factors may make calculation of this score prohibitive in other settings unless it can be computer generated.
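To show why computer generation helps, here is a deliberately simplified Python sketch of a Caprini-style calculation over a handful of EHR-derived fields. The weights follow commonly published values for the 2005 model, but this subset omits most of the model's factors and is illustrative only; verify against the original model before any clinical use.

def caprini_subset(age: int, personal_vte_history: bool,
                   central_venous_access: bool, sepsis_past_month: bool,
                   major_surgery: bool) -> int:
    """Illustrative subset of the 2005 Caprini model: age 41-60 = 1,
    61-74 = 2, >= 75 = 3; prior VTE = 3; central line = 2; sepsis
    within one month = 1; major surgery = 2. Most factors omitted."""
    points = 3 if age >= 75 else 2 if age >= 61 else 1 if age >= 41 else 0
    points += 3 * personal_vte_history
    points += 2 * central_venous_access
    points += sepsis_past_month
    points += 2 * major_surgery
    return points

print(caprini_subset(age=68, personal_vte_history=True,
                     central_venous_access=True, sepsis_past_month=False,
                     major_surgery=True))  # -> 9, in the high-risk range

The full model's dozens of weighted factors are exactly why the authors relied on automated abstraction from electronic sources.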
Bottom line: Caprini risk scores accurately distinguish critically ill surgical patients at high risk of VTE from those at lower risk.
Citation: Obi AT, Pannucci CJ, Nackashi A, et al. Validation of the Caprini venous thromboembolism risk assessment model in critically ill surgical patients. JAMA Surg. 2015;150(10):941-948. doi:10.1001/jamasurg.2015.1841.