CPR Prior to Defibrillation for VF/VT CPA

A focused investigation of expedited, stack of three shocks versus chest compressions first followed by single shocks for monitored ventricular fibrillation/ventricular tachycardia cardiopulmonary arrest in an in‐hospital setting

Cardiopulmonary arrest (CPA) is a major contributor to overall mortality in both the in- and out-of-hospital settings.[1, 2, 3] Despite advances in the field of resuscitation science, mortality from CPA remains high.[1, 4] Unlike the out-of-hospital environment, inpatient CPA is unique in that trained healthcare providers are the primary responders, with a range of expertise available throughout the duration of the arrest.

In-hospital cardiac arrest presents inherent opportunities, such as near-immediate arrest detection, rapid initiation of high-quality chest compressions, and early defibrillation if indicated. Given the association between high-quality chest compressions and improved rates of successful defibrillation, the 2005 American Heart Association (AHA) updates changed the recommended ventricular fibrillation/ventricular tachycardia (VF/VT) defibrillation sequence from 3 stacked shocks to a single shock followed by 2 minutes of chest compressions between defibrillation attempts.[5, 6] However, the recommendations were directed primarily at out-of-hospital VF/VT CPA, and it remains unclear whether this strategy offers any advantage to patients who suffer an in-hospital VF/VT arrest.[7]

Despite the aforementioned findings regarding the benefit of high-quality chest compressions, there is a paucity of evidence in the medical literature as to whether delivering a period of chest compressions before the defibrillation attempt, including the initial shock and shock sequence, translates to improved outcomes. With the exception of the statement recommending early defibrillation in cases of in-hospital arrest, there are no formal AHA consensus recommendations.[5, 8, 9] Here we document our experience using an approach of expedited stacked defibrillation shocks in persons experiencing monitored in-hospital VF/VT arrest.

METHODS

Design

This was a retrospective study of observational data from our in-hospital resuscitation database. A waiver of informed consent was granted by our institutional review board.

Setting

This study was performed in the University of California San Diego Healthcare System, which includes 2 urban academic hospitals, with a combined total of approximately 500 beds. A designated team is activated in response to code blue requests and includes: code registered nurse (RN), code doctor of medicine (MD), airway MD, respiratory therapist, pharmacist, house nursing supervisor, primary RN, and unit charge RN. Crash carts with defibrillators (ZOLL R and E series; ZOLL Medical Corp., Chelmsford, MA) are located on each inpatient unit. Defibrillator features include real‐time cardiopulmonary resuscitation (CPR) feedback, filtered electrocardiography (ECG), and continuous waveform capnography.

Resuscitation training is provided for all hospital providers as part of the novel Advanced Resuscitation Training (ART) program, which was initiated in 2007.[10] Critical care nurses and physicians receive annual training, whereas noncritical care personnel undergo biennial training. The curriculum is adaptable to institutional treatment algorithms, equipment, and code response. Content is adaptive based on provider type, unit, and opportunities for improvement as revealed by performance improvement data. Resuscitation treatment algorithms are reviewed annually by the Critical Care Committee and Code Blue Subcommittee as part of the ART program, with modifications incorporated into the institutional policies and procedures.

Subjects

All admitted patients with continuous cardiac monitoring who suffered VF/VT arrest between July 2005 and June 2013 were included in this analysis. Patients with active do-not-attempt-resuscitation orders were excluded. Patients were identified from our institutional resuscitation database, into which all in-hospital cardiopulmonary arrest data are entered. We did not have data on individual patient comorbidity or severity of illness. Overall patient acuity over the course of the study was monitored hospital-wide through the case-mix index (CMI). The index is based upon the allocation of hospital resources used to treat a diagnosis-related group of patients and has previously been used as a surrogate for patient acuity.[11, 12, 13] The code RN who performed the resuscitation is responsible for entering data into a protected performance improvement database. Telecommunications records and the unit log are cross-referenced to ensure complete capture.

Protocols

Specific protocol similarities and differences among the 3 study periods are presented in Table 1.

Table 1. Institutional In-Hospital Cardiopulmonary Arrest Protocol Variables During the Study Period

| Protocol Variable | Stacked Shock Period (2005–2008) | Initial Chest Compression Period (2008–2011) | Modified Stacked Shock Period (2011–2013) |
|---|---|---|---|
| Defibrillator type | Medtronic/Physio-Control LifePak 12 | ZOLL E Series | ZOLL E Series |
| Defibrillation energy | 200 J–300 J–360 J, manual escalation | 120 J–150 J–200 J, manual escalation | 120 J–150 J–200 J, automatic escalation |
| Distinction between monitored and unmonitored in-hospital cardiopulmonary arrest | No | Yes | Yes |
| Chest compressions prior to initial defibrillation | No | Yes | No* |
| Initial defibrillation strategy | 3 expedited stacked shocks, with a brief pause between each attempt to confirm sustained VF/VT | 2 minutes of chest compressions before the initial attempt and between attempts | 3 expedited stacked shocks, with a brief pause between each attempt to confirm sustained VF/VT* |
| Chest compression-to-ventilation ratio | 15:1 | Continuous chest compressions with ventilation at a 10:1 ratio | Continuous chest compressions with ventilation at a 10:1 ratio |
| Vasopressors | Epinephrine 1 mg IV/IO every 3–5 minutes | Epinephrine 1 mg IV/IO or vasopressin 40 units IV/IO every 3–5 minutes | Epinephrine 1 mg IV/IO or vasopressin 40 units IV/IO every 3–5 minutes |

NOTE: Abbreviations: IO, intraosseous; IV, intravenous; VF, ventricular fibrillation; VT, ventricular tachycardia. *Only if monitored or witnessed at time of arrest.

Stacked Shock Period (2005–2008)

Historically, our institutional cardiopulmonary arrest protocols advocated early defibrillation with administration of 3 stacked shocks with a brief pause between each single defibrillation attempt to confirm sustained VF/VT before initiating/resuming chest compressions.

Initial Chest Compression Period (2008–2011)

In 2008 the protocol was modified to reflect recommendations to perform a 2‐minute period of chest compressions prior to each defibrillation, including the initial attempt.

Modified Stacked Shock Period (2011–2013)

Finally, in 2011 the protocol was modified again, and defibrillators were configured to allow automatic escalation of defibrillation energy (120 J–150 J–200 J). The defibrillation protocol included the following elements.

For an unmonitored arrest, chest compressions and ventilations were initiated upon recognition of cardiopulmonary arrest. If VF/VT was identified upon placement of defibrillator pads, immediate countershock was performed, and chest compressions resumed immediately for a period of 2 minutes before a repeat defibrillation attempt was considered. A dose of epinephrine (1 mg intravenous [IV]/intraosseous [IO]) or vasopressin (40 units IV/IO) was administered as close to the reinitiation of chest compressions as possible. Defibrillation attempts proceeded with a single shock at a time, each preceded by 2 minutes of chest compressions.

For a monitored arrest, defibrillation attempts were expedited. Chest compressions without ventilations were performed only until defibrillator pads were placed. Defibrillation attempts were initiated as soon as possible, with at least 3 successive shocks administered for persistent VF/VT (stacked shocks). Compressions were performed between shocks if they did not interfere with rhythm analysis. With persistent CPA following the initial series of stacked shocks, compressions were resumed regardless of rhythm, and pressors were administered (epinephrine 1 mg IV or vasopressin 40 units IV). Persistent VF/VT received defibrillation attempts every 2 minutes following the initial series of stacked shocks, with compressions performed continuously between attempts. Persistent VF/VT also triggered emergent cardiology consultation for possible percutaneous intervention.
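The monitored/unmonitored branch of the 2011 protocol described above can be summarized as a small decision table. This is a purely illustrative sketch: the function name, field names, and return structure are ours, not part of the published protocol.

```python
# Hypothetical summary of the modified stacked shock protocol's two branches.
# Monitored arrests: expedited stacked shocks; unmonitored arrests:
# compressions first, then single shocks every 2 minutes.

def initial_defib_strategy(monitored: bool) -> dict:
    """Return the initial defibrillation approach for a VF/VT arrest."""
    if monitored:
        return {
            "compressions_first": False,  # compress only until pads are placed
            "initial_shocks": 3,          # stacked, automatic 120-150-200 J escalation
            "between_attempts": "2 min of compressions after the stacked series",
        }
    return {
        "compressions_first": True,       # compress upon arrest recognition
        "initial_shocks": 1,              # single shock if VF/VT seen on pads
        "between_attempts": "2 min of compressions before each repeat attempt",
    }

print(initial_defib_strategy(True)["initial_shocks"])   # prints 3
print(initial_defib_strategy(False)["compressions_first"])  # prints True
```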

Analysis

The primary outcome measure was survival to hospital discharge at baseline and following each protocol change. The χ2 test was used to compare the 3 time periods, with P < 0.05 defined as statistically significant. Specific group comparisons were made with Bonferroni correction, with P < 0.017 defined as statistically significant. Secondary outcome measures included return of spontaneous circulation (ROSC) and number of shocks required. Demographic and clinical data were also presented for each of the 3 study periods.
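As a worked illustration of this comparison, the sketch below recomputes the Pearson χ2 statistic for survival to hospital discharge across the 3 periods, using the counts later reported in Table 2 (18/31, 6/33, and 30/42). This is a minimal pure-Python sketch; the function is ours, and the original analysis would have used standard statistical software (e.g., a chi-square contingency-table routine).

```python
# Pearson chi-square statistic for an r x c contingency table, applied to
# the survival-to-discharge counts from Table 2 (assumed layout: one row of
# survivors, one row of deaths; columns are the SS, ICC, and MSS periods).

def chi2_stat(table):
    """Sum of (observed - expected)^2 / expected over all cells."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

table = [[18, 6, 30],   # survived to discharge
         [13, 27, 12]]  # died
stat = chi2_stat(table)
# df = (2-1)*(3-1) = 2; the critical value at P = 0.05 is 5.99, so a
# statistic near 22 is consistent with the reported P < 0.01.
print(round(stat, 2))   # prints 21.85
```

With 3 pairwise comparisons, the Bonferroni-adjusted threshold is 0.05/3 ≈ 0.017, matching the significance level stated above.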

RESULTS

A total of 661 cardiopulmonary arrests of all rhythms were identified during the entire study period. Primary VF/VT arrest was identified in 106 patients (16%). Of these, 102 (96%) were being monitored with continuous ECG at the time of arrest. Demographic and clinical information for the entire study cohort is displayed in Table 2. There were no differences in age, gender, time of arrest, and location of arrest between study periods (all P > 0.05). The incidence of VF/VT arrest did not vary significantly between the study periods (P = 0.16). There were no differences in mean number of defibrillation attempts per arrest; however, there was a significant improvement in the rate of perfusing rhythm after the initial set of defibrillation attempts and overall ROSC favoring stacked shocks (all P < 0.05, Table 2). Survival to hospital discharge for all VF/VT arrest victims decreased, then increased significantly from the stacked shock period to the initial chest compression period to the modified stacked shock period (58%, 18%, 71%, respectively, P < 0.01, Figure 1). After Bonferroni correction, specific group differences were significant between the stacked shock and initial chest compression groups (P < 0.01) and between the modified stacked shock and initial chest compression groups (P < 0.01, Table 2). Finally, the incidence of bystander CPR appeared to be significantly greater in the modified stacked shock period following implementation of our resuscitation program (Table 2). Overall hospital CMI for fiscal years 2005/2006 through 2012/2013 was significantly different (1.47 vs 1.71, P < 0.0001).

Table 2. Demographic and Clinical Data for the Study Population

| Parameter | Stacked Shocks (n = 31) | Initial Chest Compressions (n = 33) | Modified Stacked Shocks (n = 42) |
|---|---|---|---|
| Age (y) | 54.3 | 64.3 | 59.8 |
| Male gender (%) | 16 (52) | 21 (64) | 21 (50) |
| VF/PVT arrest incidence (per 1,000 admissions) | 0.49 | | 0.70 |
| Arrest 7 AM–5 PM (%) | 15 (48) | 17 (52) | 21 (50) |
| Non-ICU location (%) | 13 (42) | 15 (45) | 17 (40) |
| CPR prior to code team arrival (%) | 22 (71)* | 31 (94) | 42 (100) |
| Perfusing rhythm after initial set of defibrillation attempts (%) | 37 | 33 | 70 |
| Mean defibrillation attempts (no.) | 1.3 | 1.8 | 1.5 |
| ROSC (%) | 76 | 56 | 90 |
| Survival to hospital discharge (%) | 18 (58) | 6 (18) | 30 (71) |
| Case-mix index (average coefficient by period) | 1.51 | 1.60 | 1.69 |

NOTE: Abbreviations: CPR, cardiopulmonary resuscitation; ICC, initial chest compressions; ICU, intensive care unit; MSS, modified stacked shocks; PVT, pulseless ventricular tachycardia; ROSC, return of spontaneous circulation; SS, stacked shocks; VF, ventricular fibrillation. *P < 0.001 versus periods SS and MSS. P < 0.05 versus periods SS and ICC. P < 0.05 versus period MSS. P < 0.01 versus periods SS and MSS. P < 0.001 versus periods SS and ICC.
Figure 1
Survival to discharge for patients with ventricular fibrillation/ventricular tachycardia arrest from 2005 to 2013. Survival was significantly lower during the initial chest compression (ICC) period as compared to stacked shocks (SS) and modified stacked shock (MSS) periods (P < 0.01).

DISCUSSION

The specific focus of this observation was to report on defibrillation strategies that have previously been reported only in the out-of-hospital setting. There is no current consensus regarding chest compressions for a predetermined period prior to defibrillation in the inpatient setting. Here we present data suggesting improved outcomes with an approach that expedited defibrillation and included a stacked shock strategy (the stacked shock and modified stacked shock periods) in monitored inpatient VF/VT arrest.

Early out-of-hospital studies initially demonstrated a significant survival benefit for patients who received 1.5 to 3 minutes of chest compressions preceding defibrillation, with reported arrest downtimes of 4 to 5 minutes prior to emergency medical services arrival.[14, 15] However, in more recent randomized controlled trials, outcome was not improved when chest compressions were performed prior to the defibrillation attempt.[16, 17] Our findings suggest that there is no one-size-fits-all approach to chest compression and defibrillation strategy. Instead, we suggest that factors such as whether the arrest occurred while monitored should aid decision making and the timing of defibrillation.

Our findings favoring expedited defibrillation and stacked shocks in witnessed arrest are consistent with the 3-phase model of cardiac arrest proposed by Weisfeldt and Becker, which suggests that defibrillation success is related to the energy status of the heart.[18] In this model, the first 4 minutes of VF arrest (electrical phase) are characterized by a high-energy state, with higher adenosine triphosphate (ATP)/adenosine monophosphate (AMP) ratios that are associated with an increased likelihood of ROSC after a defibrillation attempt.[19] Further, VF appears to deplete ATP/AMP ratios after about 4 minutes, at which point the likelihood of defibrillation success is substantially diminished.[18] Between 4 and 10 minutes (circulatory phase), energy stores in the myocardium are severely depleted. However, there is evidence to suggest that high-quality chest compressions and a high chest compression fraction, particularly in conjunction with epinephrine, can replenish ATP stores and increase the likelihood of defibrillation success.[6, 20] Beyond 10 minutes (metabolic phase), survival rates are abysmal, with no therapy yet identified producing clinical utility.
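The 3-phase model above reduces to a simple mapping from estimated downtime to phase and treatment emphasis. The thresholds (4 and 10 minutes) come from the model as summarized here; the function name and labels are illustrative, not from the cited work.

```python
# Hypothetical sketch of the Weisfeldt-Becker 3-phase model: map minutes
# since arrest onset to the model's phase and the first-line emphasis
# the text associates with that phase.

def arrest_phase(downtime_min: float) -> tuple[str, str]:
    if downtime_min < 4:
        # Electrical phase: high-energy state, defibrillation most likely to work
        return ("electrical", "expedited defibrillation")
    if downtime_min <= 10:
        # Circulatory phase: depleted energy stores, compressions may restore them
        return ("circulatory", "high-quality chest compressions before defibrillation")
    # Metabolic phase: no therapy of established benefit
    return ("metabolic", "no therapy of established clinical utility")

print(arrest_phase(2)[0])   # prints electrical
```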

The secondary analyses reveal several interesting trends. We anticipated a higher number of defibrillation attempts during the initial chest compression period due to a lower likelihood of conversion with a CPR-first approach. Instead, the number of shocks was similar across all 3 periods. Our findings are consistent with previous reports of a low probability of successful defibrillation with a single or first shock. However, recent reports document that approximately 80% of patients who ultimately survive to discharge are successfully defibrillated within the first 3 shocks.[21, 22, 23]

It appears that the likelihood of conversion to a perfusing rhythm is higher with expedited, stacked shocks. This underscores the importance of identifying an optimal approach to the treatment of VF/VT, as the initial series of defibrillation attempts may determine outcomes. There also appeared to be an increase in the incidence of VF/VT during the modified stack shock period, although this was not statistically significant. The modified stack shock period correlated temporally with the expansion of our institution's cardiovascular service and the opening of a dedicated inpatient facility, which likely influenced our mixture of inpatients.

These data should be interpreted with consideration of study limitations. Primarily, we did not attempt to determine arrest times prior to initial defibrillation attempts, which is likely an important variable. However, we limited the study population to individuals experiencing VF/VT arrest that was witnessed by hospital care staff or occurred while on a cardiac monitor. We are confident that these selective criteria resulted in identification and response times well within the electrical phase. We did not evaluate differences or changes in individual patient-level severity of illness that may have confounded the outcome analysis; the effect of individual-level severity of illness and comorbidity is not known. Instead, we used CMI coefficients to explore hospital-wide changes in patient acuity during the study period. We noted an increasing case-mix coefficient value suggesting higher patient acuity, which would predict increased mortality rather than the decrease noted between the initial chest compression and modified stacked shock periods (Table 2). In addition, we did not integrate CPR process variables, such as depth, rate, recoil, chest compression fraction, and per-shock pauses, into this analysis. Our previous studies indicated that high-quality CPR may account for a significant amount of the improvement in outcomes following our resuscitation program implementation in 2007.[10, 24] Since the program's inception, we have reported continuous improvement in overall in-hospital mortality that was sustained throughout the study period despite the significant changes reported in the 3 periods with monitored VF/VT arrest.[10] The use of medications prior to initial defibrillation attempts was not recorded.
We have recently reported that during the same period of data collection, there were no significant changes in the use of epinephrine; however, there was a significant increase in the use of vasopressin.[10] It is unclear whether the increased use of vasopressin contributed to the current outcomes. However, given our cohort of witnessed in-hospital cardiac arrests with an initial shockable rhythm, we consider the use of vasopressors prior to a defibrillation attempt unlikely.

Additional important limitations and potential confounding factors in this study were the use of 2 different types of defibrillators, differing escalating energy strategies, and differing defibrillator waveforms. Recent evidence supports biphasic waveforms as more effective than monophasic waveforms.[25, 26, 27] Comparison of defibrillator brand and waveform superiority is beyond the scope of this study; however, it is interesting to note similarly high rates of survival in the stacked shock and modified stacked shock phases despite the use of different defibrillator brands and waveforms during those respective phases. Regarding escalating energy of defibrillation countershocks, the most recent 2010 AHA guidelines take no position on the superiority of either manual or automatic escalation.[7] However, we noted similarly high rates of survival in the stacked shock and modified stacked shock periods despite the use of differing escalation strategies. Finally, we used survival to hospital discharge as our main outcome measure rather than neurological status. However, prior studies from our institution suggest that most VF/VT survivors have good neurological outcomes, which are influenced heavily by preadmission functional status.[24]

CONCLUSIONS

Our data suggest that in cases of monitored VF/VT arrest, expeditious defibrillation with the use of stacked shocks is associated with a higher rate of ROSC and survival to hospital discharge.

Disclosure: Nothing to report.

References
  1. Morrison LJ, Neumar RW, Zimmerman JL, et al. Strategies for improving survival after in-hospital cardiac arrest in the United States: 2013 consensus recommendations: a consensus statement from the American Heart Association. Circulation. 2013;127:1538–1563.
  2. Peberdy MA, Ornato JP, Larkin GL, et al. Survival from in-hospital cardiac arrest during nights and weekends. JAMA. 2008;299:785–792.
  3. Rosamond W, Flegal K, Furie K, et al. Heart disease and stroke statistics—2008 update: a report from the American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Circulation. 2008;117:e25–e146.
  4. Sasson C, Rogers MA, Dahl J, Kellermann AL. Predictors of survival from out-of-hospital cardiac arrest: a systematic review and meta-analysis. Circ Cardiovasc Qual Outcomes. 2010;3:63–81.
  5. Abella BS, Alvarado JP, Myklebust H, et al. Quality of cardiopulmonary resuscitation during in-hospital cardiac arrest. JAMA. 2005;293:305–310.
  6. Christenson J, Andrusiek D, Everson-Stewart S, et al. Chest compression fraction determines survival in patients with out-of-hospital ventricular fibrillation. Circulation. 2009;120:1241–1247.
  7. Jacobs I, Sunde K, Deakin CD, et al. Part 6: Defibrillation: 2010 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science With Treatment Recommendations. Circulation. 2010;122:S325–S337.
  8. Field JM, Hazinski MF, Sayre MR, et al. Part 1: executive summary: 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation. 2010;122:S640–S656.
  9. Neumar RW, Otto CW, Link MS, et al. Part 8: adult advanced cardiovascular life support: 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation. 2010;122:S729–S767.
  10. Davis DP, Graham PG, Husa RD, et al. A performance improvement-based resuscitation programme reduces arrest incidence and increases survival from in-hospital cardiac arrest. Resuscitation. 2015;92:63–69.
  11. Averill RF. The evolution of case-mix measurement using DRGs: past, present and future. Stud Health Technol Inform. 1994;14:75–83.
  12. Merchant RM, Yang L, Becker LB, et al. Variability in case-mix adjusted in-hospital cardiac arrest rates. Med Care. 2012;50:124–130.
  13. Timbie JW, Hussey PS, Adams JL, Ruder TW, Mehrotra A. Impact of socioeconomic adjustment on physicians' relative cost of care. Med Care. 2013;51:454–460.
  14. Cobb LA, Fahrenbruch CE, Walsh TR, et al. Influence of cardiopulmonary resuscitation prior to defibrillation in patients with out-of-hospital ventricular fibrillation. JAMA. 1999;281:1182–1188.
  15. Wik L, Hansen TB, Fylling F, et al. Delaying defibrillation to give basic cardiopulmonary resuscitation to patients with out-of-hospital ventricular fibrillation: a randomized trial. JAMA. 2003;289:1389–1395.
  16. Baker PW, Conway J, Cotton C, et al. Defibrillation or cardiopulmonary resuscitation first for patients with out-of-hospital cardiac arrests found by paramedics to be in ventricular fibrillation? A randomised control trial. Resuscitation. 2008;79:424–431.
  17. Jacobs IG, Finn JC, Oxer HF, Jelinek GA. CPR before defibrillation in out-of-hospital cardiac arrest: a randomized trial. Emerg Med Australas. 2005;17:39–45.
  18. Weisfeldt ML, Becker LB. Resuscitation after cardiac arrest: a 3-phase time-sensitive model. JAMA. 2002;288:3035–3038.
  19. Salcido DD, Menegazzi JJ, Suffoletto BP, Logue ES, Sherman LD. Association of intramyocardial high energy phosphate concentrations with quantitative measures of the ventricular fibrillation electrocardiogram waveform. Resuscitation. 2009;80:946–950.
  20. Holzer M, Behringer W, Sterz F, et al. Ventricular fibrillation median frequency may not be useful for monitoring during cardiac arrest treated with endothelin-1 or epinephrine. Anesth Analg. 2004;99:1787–1793.
  21. Eftestøl T, Sunde K, Aase SO, Husøy JH, Steen PA. "Probability of successful defibrillation" as a monitor during CPR in out-of-hospital cardiac arrested patients. Resuscitation. 2015;48:245–254.
  22. Rodriguez-Nunez A, Lopez-Herce J, Castillo J, Bellon JM. Shockable rhythms and defibrillation during in-hospital pediatric cardiac arrest. Resuscitation. 2014;85:387–391.
  23. Tomkins WG, Swain AH, Bailey M, Larsen PD. Beyond the pre-shock pause: the effect of prehospital defibrillation mode on CPR interruptions and return of spontaneous circulation. Resuscitation. 2013;84:575–579.
  24. Sell RE, Lawrence B, Davis DP. Implementing a "resuscitation bundle" decreases incidence and improves outcomes in inpatient cardiopulmonary arrest. Circulation. 2009;120(18 Suppl):S1441.
  25. Schneider T, Martens PR, Paschen H, et al. Multicenter, randomized, controlled trial of 150-J biphasic shocks compared with 200- to 360-J monophasic shocks in the resuscitation of out-of-hospital cardiac arrest victims. Optimized Response to Cardiac Arrest (ORCA) Investigators. Circulation. 2000;102:1780–1787.
  26. Alem AP, Chapman FW, Lank P, Hart AA, Koster RW. A prospective, randomised and blinded comparison of first shock success of monophasic and biphasic waveforms in out-of-hospital cardiac arrest. Resuscitation. 2003;58:17–24.
  27. Morrison LJ, Dorian P, Long J, et al. Out-of-hospital cardiac arrest rectilinear biphasic to monophasic damped sine defibrillation waveforms with advanced life support intervention trial (ORBIT). Resuscitation. 2005;66:149–157.
Issue
Journal of Hospital Medicine - 11(4)
Page Number
264-268


RESULTS

A total of 661 cardiopulmonary arrests of all rhythms were identified during the entire study period. Primary VF/VT arrests was identified in 106 patients (16%). Of these, 102 (96%) were being monitored with continuous ECG at the time of arrest. Demographic and clinical information for the entire study cohort are displayed in Table 2. There were no differences in age, gender, time of arrest, and location of arrest between study periods (all P > 0.05). The incidence of VF/VT arrest did not vary significantly between the study periods (P = 0.16). There were no differences in mean number of defibrillation attempts per arrest; however, there was a significant improvement in the rate of perfusing rhythm after initial set of defibrillation attempts and overall ROSC favoring stacked shocks (all P < 0.05, Table 2). Survival‐to‐hospital discharge for all VF/VT arrest victims decreased, then increased significantly from the stacked shock period to initial chest compression period to modified stacked shock period (58%, 18%, 71%, respectively, P < 0.01, Figure 1). After Bonferroni correction, specific group differences were significant between the stacked shock and initial chest compression groups (P < 0.01) and modified stacked shocks and initial chest compression groups (P < 0.01, Table 2). Finally, the incidence of bystander CPR appeared to be significantly greater in the modified stacked shock period following implementation of our resuscitation program (Table 2). Overall hospital CMI for fiscal years 2005/2006 through 2012/2013 were significantly different (1.47 vs 1.71, P < 0.0001).

Demographic and Clinical Data for Study Population
ParameterStacked Shocks (n = 31)Initial Chest Compressions (n = 33)Modified Stack Shocks (n = 42)
  • NOTE: Abbreviations: CPR, cardiopulmonary resuscitation; ICC, initial chest compressions; ICU, intensive care unit; MSS, modified stack shocks; PVT, pulseless ventricular tachycardia; ROSC, return of spontaneous circulation; SS, stacked shocks; VF, ventricular fibrillation. *P < 0.001 versus periods SS and MSS. P < 0.05 versus periods SS and ICC. P < 0.05 versus period MSS. P < 0.01 versus periods SS and MSS. P < 0.001 versus periods SS and ICC.

Age (y)54.364.359.8
Male gender (%)16 (52)21 (64)21 (50)
VF/PVT arrest incidence (per 1,000 admissions)0.49 0.70
Arrest 7 am5 pm (%)15 (48)17 (52)21 (50)
Non‐ICU location (%)13 (42)15 (45)17 (40)
CPR prior to code team arrival (%)22 (71)*31 (94)42 (100)
Perfusing rhythm after initial set of defibrillation attempts (%)373370
Mean defibrillation attempts (no.)1.31.81.5
ROSC (%)765690
Survival‐to‐hospital discharge (%)18 (58)6 (18)30 (71)
Case‐mix index (average coefficient by period)1.511.601.69
Figure 1
Survival to discharge for patients with ventricular fibrillation/ventricular tachycardia arrest from 2005 to 2013. Survival was significantly lower during the initial chest compression (ICC) period as compared to stacked shocks (SS) and modified stacked shock (MSS) periods (P < 0.01).

DISCUSSION

The specific focus of this observation was to report on defibrillation strategies that have previously only been reported in an out‐of‐hospital setting. There is no current consensus regarding chest compressions for a predetermined amount of time prior to defibrillation in an inpatient setting. Here we present data suggesting improved outcomes using an approach that expedited defibrillation and included a defibrillation strategy of stacked shocks (stacked shock and modified stack shock, respectively) in monitored inpatient VF/VT arrest.

Early out‐of‐hospital studies initially demonstrated a significant survival benefit for patients who received 1.5 to 3 minutes of chest compressions preceding defibrillation with reported arrest downtimes of 4 to 5 minutes prior to emergency medical services arrival.[14, 15] However, in more recent randomized controlled trials, outcome was not improved when chest compressions were performed prior to defibrillation attempt.[16, 17] Our findings suggest that there is no one size fits all approach to chest compression and defibrillation strategy. Instead, we suggest that factors including whether the arrest occurred while monitored or not aid with decision making and timing of defibrillation.

Our findings favoring expedited defibrillation and stacked shocks in witnessed arrest are consistent with the 3‐phase model of cardiac arrest proposed by Weisfeldt and Becker suggesting that defibrillation success is related to the energy status of the heart.[18] In this model, the first 4 minutes of VF arrest (electrical phase) are characterized by a high‐energy state with higher adenosine triphosphate (ATP)/adenosine monophosphate (AMP) ratios that are associated with increased likelihood for ROSC after defibrillation attempt.[19] Further, VF appears to deplete ATP/AMP ratios after about 4 minutes, at which point the likelihood of defibrillation success is substantially diminished.[18] Between 4 and 10 minutes (circulatory phase), energy stores in the myocardium are severely depleted. However, there is evidence to suggest that high‐quality chest compressions and high chest compression fractionparticularly in conjunction with epinephrinecan replenish ATP stores and increase the likelihood of defibrillation success.[6, 20] Beyond 10 minutes (metabolic phase), survival rates are abysmal, with no therapy yet identified producing clinical utility.

The secondary analyses reveal several interesting trends. We anticipated a higher number of defibrillation attempts during phase II due to a lower likelihood of conversion with a CPR‐first approach. Instead, the number of shocks was similar across all 3 periods. Our findings are consistent with previous reports of a low single or first shock probability of successful defibrillation. However, recent reports document that approximately 80% of patients who ultimately survive to discharge are successfully defibrillated within the first 3 shocks.[21, 22, 23]

It appears that the likelihood of conversion to a perfusing rhythm is higher with expedited, stacked shocks. This underscores the importance of identifying an optimal approach to the treatment of VF/VT, as the initial series of defibrillation attempts may determine outcomes. There also appeared to be an increase in the incidence of VF/VT during the modified stack shock period, although this was not statistically significant. The modified stack shock period correlated temporally with the expansion of our institution's cardiovascular service and the opening of a dedicated inpatient facility, which likely influenced our mixture of inpatients.

These data should be interpreted with consideration of study limitations. Primarily, we did not attempt to determine arrest times prior to initial defibrillation attempts, which is likely an important variable. However, we limited our population studied only to individuals experiencing VF/VT arrest that was witnessed by hospital care staff or occurred while on cardiac monitor. We are confident that these selective criteria resulted in expedited identification and response times well within the electrical phase. We did not evaluate differences or changes in individual patient‐level severity of illness that may have potentially confounded outcome analysis. The effect of individual level in severity of illness and comorbidity are not known. Instead, we used CMI coefficients to explore hospital wide changes in patient acuity during the study period. We noticed an increasing case‐mix coefficient value suggesting higher patient acuity, which would predict increased mortality rather than the decrease noted between the initial chest compression and modified stacked shock periods (Table 2). In addition, we did not integrate CPR process variables, such as depth, rate, recoil, chest compression fraction, and per‐shock pauses, into this analysis. Our previous studies indicated that high‐quality CPR may account for a significant amount of improvement in outcomes following our novel resuscitation program implementation in 2007.[10, 24] Since the program's inception, we have reported continuous improvement in overall in‐hospital mortality that was sustained throughout the duration of the study period despite the significant changes reported in the 3 periods with monitored VF/VT arrest.[10] The use of medications prior to initial defibrillation attempts was not recorded. 
We have recently reported that during the same period of data collection, there were no significant changes in the use of epinephrine; however, there was a significant increase in the use of vasopressin.[10] It is unclear whether the increased use of vasopressin contributed to the current outcomes. However, given our cohort of witnessed in‐hospital cardiac arrests with an initial shockable rhythm, we anticipate the use of vasopressors as unlikely prior to defibrillation attempt.

Additional important limitations and potential confounding factors in this study were the use of 2 different types of defibrillators, differing escalating energy strategies, and differing defibrillator waveforms. Recent evidence supports biphasic waveforms as more effective than monophasic waveforms.[25, 26, 27] Comparison of defibrillator brand and waveform superiority is out the scope of this study; however, it is interesting to note similar high rates of survival in the stacked shock and modified stack shock phases despite use of different defibrillator brands and waveforms during those respective phases. Regarding escalating energy of defibrillation countershocks, the most recent 2010 AHA guidelines have no position on the superiority of either manual or automatic escalation.[7] However, we noted similar high rates of survival in the stacked shock and modified stack shock periods despite use of differing escalating strategies. Finally, we used survival‐to‐hospital discharge as our main outcome measure rather than neurological status. However, prior studies from our institution suggest that most VF/VT survivors have good neurological outcomes, which are influenced heavily by preadmission functional status.[24]

CONCLUSIONS

Our data suggest that in cases of monitored VF/VT arrest, expeditious defibrillation with use of stacked shocks is associated with a higher rate of ROSC and survival to hospital discharge

Disclosure: Nothing to report.


METHODS

Design

This was a retrospective study of observational data from our in‐hospital resuscitation database. Waiver of informed consent was granted by our institutional review board.

Setting

This study was performed in the University of California San Diego Healthcare System, which includes 2 urban academic hospitals, with a combined total of approximately 500 beds. A designated team is activated in response to code blue requests and includes: code registered nurse (RN), code doctor of medicine (MD), airway MD, respiratory therapist, pharmacist, house nursing supervisor, primary RN, and unit charge RN. Crash carts with defibrillators (ZOLL R and E series; ZOLL Medical Corp., Chelmsford, MA) are located on each inpatient unit. Defibrillator features include real‐time cardiopulmonary resuscitation (CPR) feedback, filtered electrocardiography (ECG), and continuous waveform capnography.

Resuscitation training is provided for all hospital providers as part of the novel Advanced Resuscitation Training (ART) program, which was initiated in 2007.[10] Critical care nurses and physicians receive annual training, whereas noncritical care personnel undergo biennial training. The curriculum is adaptable to institutional treatment algorithms, equipment, and code response. Content is adaptive based on provider type, unit, and opportunities for improvement as revealed by performance improvement data. Resuscitation treatment algorithms are reviewed annually by the Critical Care Committee and Code Blue Subcommittee as part of the ART program, with modifications incorporated into the institutional policies and procedures.

Subjects

All admitted patients with continuous cardiac monitoring who suffered VF/VT arrest between July 2005 and June 2013 were included in this analysis. Patients with active do-not-attempt-resuscitation orders were excluded. Patients were identified from our institutional resuscitation database, into which all in‐hospital cardiopulmonary arrest data are entered. We did not have data on individual patient comorbidity or severity of illness. Overall patient acuity over the course of the study was monitored hospital-wide through the case‐mix index (CMI). The index is based upon the allocation of hospital resources used to treat a diagnosis‐related group of patients and has previously been used as a surrogate for patient acuity.[11, 12, 13] The code RN who performed the resuscitation is responsible for entering data into a protected performance improvement database. Telecommunications records and the unit log are cross‐referenced to assure complete capture.

Protocols

Specific protocol similarities and differences among the 3 study periods are presented in Table 1.

Institutional In‐hospital Cardiopulmonary Arrest Protocol Variables During the Study Period
Protocol Variable | Stacked Shock Period (2005–2008) | Initial Chest Compression Period (2008–2011) | Modified Stacked Shock Period (2011–2013)
Defibrillator type | Medtronic/Physio Control LifePak 12 | Zoll E Series | Zoll E Series
Joule increment with defibrillation | 200J‐300J‐360J, manual escalation | 120J‐150J‐200J, manual escalation | 120J‐150J‐200J, automatic escalation
Distinction between monitored and unmonitored in‐hospital cardiopulmonary arrest | No | Yes | Yes
Chest compressions prior to initial defibrillation | No | Yes | No*
Initial defibrillation strategy | 3 expedited stacked shocks with a brief pause between each single defibrillation attempt to confirm sustained VF/VT | 2 minutes of chest compressions prior to initial and in between attempts | 3 expedited stacked shocks with a brief pause between each single defibrillation attempt to confirm sustained VF/VT*
Chest compression to ventilation ratio | 15:1 | Continuous chest compressions with ventilation at ratio 10:1 | Continuous chest compressions with ventilation at ratio 10:1
Vasopressors | Epinephrine 1 mg IV/IO every 3–5 minutes | Epinephrine 1 mg IV/IO or vasopressin 40 units IV/IO every 3–5 minutes | Epinephrine 1 mg IV/IO or vasopressin 40 units IV/IO every 3–5 minutes
  • NOTE: Abbreviations: IO, intraosseous; IV, intravenous; VF, ventricular fibrillation; VT, ventricular tachycardia. *Only if monitored or witnessed at time of arrest.

Stacked Shock Period (2005–2008)

Historically, our institutional cardiopulmonary arrest protocols advocated early defibrillation with administration of 3 stacked shocks with a brief pause between each single defibrillation attempt to confirm sustained VF/VT before initiating/resuming chest compressions.

Initial Chest Compression Period (2008–2011)

In 2008 the protocol was modified to reflect recommendations to perform a 2‐minute period of chest compressions prior to each defibrillation, including the initial attempt.

Modified Stacked Shock Period (2011–2013)

Finally, in 2011 the protocol was modified again, and defibrillators were configured to allow automatic advancement of defibrillation energy (120J‐150J‐200J). The defibrillation protocol included the following elements.

For an unmonitored arrest, chest compressions and ventilations were initiated upon recognition of cardiopulmonary arrest. If VF/VT was identified upon placement of defibrillator pads, an immediate countershock was performed, and chest compressions resumed immediately for a period of 2 minutes before a repeat defibrillation attempt was considered. A dose of epinephrine (1 mg intravenous [IV]/intraosseous [IO]) or vasopressin (40 units IV/IO) was administered as close to the reinitiation of chest compressions as possible. Defibrillation attempts proceeded with a single shock at a time, each preceded by 2 minutes of chest compressions.

For a monitored arrest, defibrillation attempts were expedited. Chest compressions without ventilations were initiated only until defibrillator pads were placed. Defibrillation attempts were initiated as soon as possible, with 3 or more successive shocks administered for persistent VF/VT (stacked shocks). Compressions were performed between shocks if they did not interfere with rhythm analysis. Compressions resumed following the initial series of stacked shocks with persistent CPA, regardless of rhythm, and pressors were administered (epinephrine 1 mg IV or vasopressin 40 units IV). Persistent VF/VT received defibrillation attempts every 2 minutes following the initial series of stacked shocks, with compressions performed continuously between attempts. Persistent VF/VT triggered emergent cardiology consultation for possible percutaneous intervention.
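The monitored/unmonitored branch of the 2011 protocol described above can be restated as a short sketch. The function and field names below are hypothetical, introduced only to summarize the protocol text; this is an illustration, not a clinical decision tool.

```python
def initial_defibrillation_plan(monitored: bool) -> dict:
    """Hypothetical restatement of the 2011 modified stacked shock protocol branch.

    Keys and wording are illustrative; the clinical content follows the
    protocol described in the text above."""
    if monitored:
        return {
            "compressions_before_first_shock": "only until defibrillator pads are placed",
            "initial_strategy": "3 or more expedited stacked shocks for persistent VF/VT",
            "after_initial_series": "continuous compressions; repeat shock every 2 minutes",
        }
    return {
        "compressions_before_first_shock": "immediately upon arrest recognition",
        "initial_strategy": "single shock if VF/VT seen when pads are placed",
        "after_initial_series": "2 minutes of compressions before each repeat shock",
    }
```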

Analysis

The primary outcome measure was defined as survival to hospital discharge at baseline and following each protocol change. Chi-square testing was used to compare the 3 time periods, with P < 0.05 defined as statistically significant. Specific group comparisons were made with Bonferroni correction, with P < 0.017 defined as statistically significant. Secondary outcome measures included return of spontaneous circulation (ROSC) and number of shocks required. Demographic and clinical data were also presented for each of the 3 study periods.
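As an illustration of this analysis, the omnibus and Bonferroni-corrected pairwise chi-square comparisons can be reproduced from the survival-to-discharge counts in Table 2. The sketch below uses only the Python standard library (closed-form chi-square tail probabilities exist for 1 and 2 degrees of freedom); the helper names are ours, and the study does not specify the software actually used.

```python
import math

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table (rows = groups)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

def chi2_sf(x, df):
    """Upper-tail probability of the chi-square distribution.

    Closed forms: df = 1 via the complementary error function,
    df = 2 via the exponential distribution."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2))
    if df == 2:
        return math.exp(-x / 2)
    raise ValueError("closed form implemented only for df = 1 or 2")

# Survival-to-discharge counts from Table 2: [survived, died] per period
ss, icc, mss = [18, 13], [6, 27], [30, 12]

# Omnibus comparison of the 3 periods (df = 2), alpha = 0.05
stat = chi_square([ss, icc, mss])
print(f"overall: chi2 = {stat:.1f}, p = {chi2_sf(stat, df=2):.1e}")

# Pairwise comparisons (df = 1) against the Bonferroni threshold 0.05 / 3 ~ 0.017
for label, a, b in [("SS vs ICC", ss, icc), ("ICC vs MSS", icc, mss), ("SS vs MSS", ss, mss)]:
    s = chi_square([a, b])
    print(f"{label}: chi2 = {s:.2f}, p = {chi2_sf(s, df=1):.4f}")
```

Run on the Table 2 counts, this reproduces the reported pattern: the omnibus comparison and both comparisons against the initial chest compression period fall below their thresholds, while stacked shocks versus modified stacked shocks does not.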

RESULTS

A total of 661 cardiopulmonary arrests of all rhythms were identified during the entire study period. Primary VF/VT arrest was identified in 106 patients (16%). Of these, 102 (96%) were being monitored with continuous ECG at the time of arrest. Demographic and clinical information for the entire study cohort is displayed in Table 2. There were no differences in age, gender, time of arrest, or location of arrest between study periods (all P > 0.05). The incidence of VF/VT arrest did not vary significantly between the study periods (P = 0.16). There were no differences in mean number of defibrillation attempts per arrest; however, there was a significant improvement in the rate of perfusing rhythm after the initial set of defibrillation attempts and in overall ROSC favoring stacked shocks (all P < 0.05, Table 2). Survival‐to‐hospital discharge for all VF/VT arrest victims decreased, then increased significantly from the stacked shock period to the initial chest compression period to the modified stacked shock period (58%, 18%, and 71%, respectively, P < 0.01, Figure 1). After Bonferroni correction, specific group differences were significant between the stacked shock and initial chest compression groups (P < 0.01) and between the modified stacked shock and initial chest compression groups (P < 0.01, Table 2). Finally, the incidence of bystander CPR appeared to be significantly greater in the modified stacked shock period following implementation of our resuscitation program (Table 2). Overall hospital CMI for fiscal years 2005/2006 through 2012/2013 was significantly different (1.47 vs 1.71, P < 0.0001).

Demographic and Clinical Data for Study Population
Parameter | Stacked Shocks (n = 31) | Initial Chest Compressions (n = 33) | Modified Stacked Shocks (n = 42)
Age (y) | 54.3 | 64.3 | 59.8
Male gender (%) | 16 (52) | 21 (64) | 21 (50)
VF/PVT arrest incidence (per 1,000 admissions) | 0.49 | | 0.70
Arrest 7 am–5 pm (%) | 15 (48) | 17 (52) | 21 (50)
Non‐ICU location (%) | 13 (42) | 15 (45) | 17 (40)
CPR prior to code team arrival (%) | 22 (71)* | 31 (94) | 42 (100)
Perfusing rhythm after initial set of defibrillation attempts (%) | 37 | 33 | 70
Mean defibrillation attempts (no.) | 1.3 | 1.8 | 1.5
ROSC (%) | 76 | 56 | 90
Survival‐to‐hospital discharge (%) | 18 (58) | 6 (18) | 30 (71)
Case‐mix index (average coefficient by period) | 1.51 | 1.60 | 1.69
  • NOTE: Abbreviations: CPR, cardiopulmonary resuscitation; ICC, initial chest compressions; ICU, intensive care unit; MSS, modified stack shocks; PVT, pulseless ventricular tachycardia; ROSC, return of spontaneous circulation; SS, stacked shocks; VF, ventricular fibrillation. *P < 0.001 versus periods SS and MSS. P < 0.05 versus periods SS and ICC. P < 0.05 versus period MSS. P < 0.01 versus periods SS and MSS. P < 0.001 versus periods SS and ICC.
Figure 1
Survival to discharge for patients with ventricular fibrillation/ventricular tachycardia arrest from 2005 to 2013. Survival was significantly lower during the initial chest compression (ICC) period as compared to stacked shocks (SS) and modified stacked shock (MSS) periods (P < 0.01).

DISCUSSION

The specific focus of this observation was to report on defibrillation strategies that have previously been reported only in the out‐of‐hospital setting. There is no current consensus regarding chest compressions for a predetermined amount of time prior to defibrillation in an inpatient setting. Here we present data suggesting improved outcomes with approaches that expedited defibrillation and used a stacked shock strategy (the stacked shock and modified stacked shock periods) in monitored inpatient VF/VT arrest.

Early out‐of‐hospital studies initially demonstrated a significant survival benefit for patients who received 1.5 to 3 minutes of chest compressions preceding defibrillation, with reported arrest downtimes of 4 to 5 minutes prior to emergency medical services arrival.[14, 15] However, in more recent randomized controlled trials, outcome was not improved when chest compressions were performed prior to the defibrillation attempt.[16, 17] Our findings suggest that there is no one-size-fits-all approach to chest compression and defibrillation strategy. Instead, we suggest that factors such as whether the arrest occurred while monitored should inform decision making about the timing of defibrillation.

Our findings favoring expedited defibrillation and stacked shocks in witnessed arrest are consistent with the 3‐phase model of cardiac arrest proposed by Weisfeldt and Becker suggesting that defibrillation success is related to the energy status of the heart.[18] In this model, the first 4 minutes of VF arrest (electrical phase) are characterized by a high‐energy state with higher adenosine triphosphate (ATP)/adenosine monophosphate (AMP) ratios that are associated with increased likelihood for ROSC after defibrillation attempt.[19] Further, VF appears to deplete ATP/AMP ratios after about 4 minutes, at which point the likelihood of defibrillation success is substantially diminished.[18] Between 4 and 10 minutes (circulatory phase), energy stores in the myocardium are severely depleted. However, there is evidence to suggest that high‐quality chest compressions and high chest compression fraction, particularly in conjunction with epinephrine, can replenish ATP stores and increase the likelihood of defibrillation success.[6, 20] Beyond 10 minutes (metabolic phase), survival rates are abysmal, with no therapy yet identified producing clinical utility.
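The phase boundaries of this model can be captured in a trivial sketch; the cutoffs come from the text above, and the function is purely illustrative, not a clinical tool.

```python
def arrest_phase(minutes_since_onset: float) -> str:
    """Three-phase model of VF arrest (Weisfeldt and Becker); cutoffs from the text."""
    if minutes_since_onset < 4:
        return "electrical"   # high ATP/AMP ratio; immediate defibrillation favored
    if minutes_since_onset < 10:
        return "circulatory"  # myocardial energy depleted; compressions (with epinephrine) first
    return "metabolic"        # survival abysmal; no proven therapy
```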

The secondary analyses reveal several interesting trends. We anticipated a higher number of defibrillation attempts during the initial chest compression period because of a lower likelihood of conversion with a CPR‐first approach. Instead, the number of shocks was similar across all 3 periods. Our findings are consistent with previous reports of a low probability of successful defibrillation with a single or first shock. However, recent reports document that approximately 80% of patients who ultimately survive to discharge are successfully defibrillated within the first 3 shocks.[21, 22, 23]

It appears that the likelihood of conversion to a perfusing rhythm is higher with expedited, stacked shocks. This underscores the importance of identifying an optimal approach to the treatment of VF/VT, as the initial series of defibrillation attempts may determine outcomes. There also appeared to be an increase in the incidence of VF/VT during the modified stacked shock period, although this was not statistically significant. The modified stacked shock period correlated temporally with the expansion of our institution's cardiovascular service and the opening of a dedicated inpatient facility, which likely influenced our inpatient case mix.

These data should be interpreted with consideration of study limitations. Primarily, we did not attempt to determine arrest times prior to initial defibrillation attempts, which is likely an important variable. However, we limited the study population to individuals experiencing VF/VT arrest that was witnessed by hospital care staff or occurred while on a cardiac monitor. We are confident that these selective criteria resulted in expedited identification and response times well within the electrical phase. We did not evaluate differences or changes in individual patient‐level severity of illness that may have confounded the outcome analysis; the effects of individual‐level severity of illness and comorbidity are not known. Instead, we used CMI coefficients to explore hospital‐wide changes in patient acuity during the study period. We noted an increasing case‐mix coefficient value suggesting higher patient acuity, which would predict increased mortality rather than the decrease noted between the initial chest compression and modified stacked shock periods (Table 2). In addition, we did not integrate CPR process variables, such as depth, rate, recoil, chest compression fraction, and peri‐shock pauses, into this analysis. Our previous studies indicated that high‐quality CPR may account for a significant amount of the improvement in outcomes following our novel resuscitation program implementation in 2007.[10, 24] Since the program's inception, we have reported continuous improvement in overall in‐hospital mortality that was sustained throughout the study period despite the significant changes reported in the 3 periods with monitored VF/VT arrest.[10] The use of medications prior to initial defibrillation attempts was not recorded.
We have recently reported that during the same period of data collection, there were no significant changes in the use of epinephrine; however, there was a significant increase in the use of vasopressin.[10] It is unclear whether the increased use of vasopressin contributed to the current outcomes. However, given our cohort of witnessed in‐hospital cardiac arrests with an initial shockable rhythm, we consider vasopressor use prior to the defibrillation attempt unlikely.

Additional important limitations and potential confounding factors in this study were the use of 2 different types of defibrillators, differing escalating energy strategies, and differing defibrillator waveforms. Recent evidence supports biphasic waveforms as more effective than monophasic waveforms.[25, 26, 27] Comparison of defibrillator brand and waveform superiority is outside the scope of this study; however, it is interesting to note similar high rates of survival in the stacked shock and modified stacked shock phases despite the use of different defibrillator brands and waveforms during those respective phases. Regarding escalating energy of defibrillation countershocks, the most recent 2010 AHA guidelines take no position on the superiority of either manual or automatic escalation.[7] However, we noted similar high rates of survival in the stacked shock and modified stacked shock periods despite the use of differing escalation strategies. Finally, we used survival‐to‐hospital discharge as our main outcome measure rather than neurological status. However, prior studies from our institution suggest that most VF/VT survivors have good neurological outcomes, which are influenced heavily by preadmission functional status.[24]
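Manual and automatic escalation differ only in who advances the energy; the escalation sequence itself can be sketched as a generator that steps through the configured energies and then holds at the maximum. This is an illustration of the configurations listed in Table 1, not device firmware.

```python
from itertools import chain, islice, repeat

def shock_energies(sequence=(120, 150, 200)):
    """Yield defibrillation energies in order, then hold at the maximum
    (mirrors the automatic-escalation configuration described in the text)."""
    return chain(sequence, repeat(sequence[-1]))

# First five shocks under the 2011 configuration: 120, 150, 200, 200, 200
print(list(islice(shock_energies(), 5)))

# The 2005-2008 LifePak configuration used a higher, manually escalated sequence
print(list(islice(shock_energies((200, 300, 360)), 4)))
```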

CONCLUSIONS

Our data suggest that in cases of monitored VF/VT arrest, expeditious defibrillation with the use of stacked shocks is associated with higher rates of ROSC and survival to hospital discharge.

Disclosure: Nothing to report.

References
  1. Morrison LJ, Neumar RW, Zimmerman JL, et al. Strategies for improving survival after in‐hospital cardiac arrest in the United States: 2013 consensus recommendations: a consensus statement from the American Heart Association. Circulation. 2013;127:1538–1563.
  2. Peberdy MA, Ornato JP, Larkin GL, et al. Survival from in‐hospital cardiac arrest during nights and weekends. JAMA. 2008;299:785–792.
  3. Rosamond W, Flegal K, Furie K, et al. Heart disease and stroke statistics—2008 update: a report from the American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Circulation. 2008;117:e25–e146.
  4. Sasson C, Rogers MA, Dahl J, Kellermann AL. Predictors of survival from out‐of‐hospital cardiac arrest: a systematic review and meta‐analysis. Circ Cardiovasc Qual Outcomes. 2010;3:63–81.
  5. Abella BS, Alvarado JP, Myklebust H, et al. Quality of cardiopulmonary resuscitation during in‐hospital cardiac arrest. JAMA. 2005;293:305–310.
  6. Christenson J, Andrusiek D, Everson‐Stewart S, et al. Chest compression fraction determines survival in patients with out‐of‐hospital ventricular fibrillation. Circulation. 2009;120:1241–1247.
  7. Jacobs I, Sunde K, Deakin CD, et al. Part 6: Defibrillation: 2010 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science With Treatment Recommendations. Circulation. 2010;122:S325–S337.
  8. Field JM, Hazinski MF, Sayre MR, et al. Part 1: executive summary: 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation. 2010;122:S640–S656.
  9. Neumar RW, Otto CW, Link MS, et al. Part 8: adult advanced cardiovascular life support: 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation. 2010;122:S729–S767.
  10. Davis DP, Graham PG, Husa RD, et al. A performance improvement‐based resuscitation programme reduces arrest incidence and increases survival from in‐hospital cardiac arrest. Resuscitation. 2015;92:63–69.
  11. Averill RF. The evolution of case‐mix measurement using DRGs: past, present and future. Stud Health Technol Inform. 1994;14:75–83.
  12. Merchant RM, Yang L, Becker LB, et al. Variability in case‐mix adjusted in‐hospital cardiac arrest rates. Med Care. 2012;50:124–130.
  13. Timbie JW, Hussey PS, Adams JL, Ruder TW, Mehrotra A. Impact of socioeconomic adjustment on physicians' relative cost of care. Med Care. 2013;51:454–460.
  14. Cobb LA, Fahrenbruch CE, Walsh TR, et al. Influence of cardiopulmonary resuscitation prior to defibrillation in patients with out‐of‐hospital ventricular fibrillation. JAMA. 1999;281:1182–1188.
  15. Wik L, Hansen TB, Fylling F, et al. Delaying defibrillation to give basic cardiopulmonary resuscitation to patients with out‐of‐hospital ventricular fibrillation: a randomized trial. JAMA. 2003;289:1389–1395.
  16. Baker PW, Conway J, Cotton C, et al. Defibrillation or cardiopulmonary resuscitation first for patients with out‐of‐hospital cardiac arrests found by paramedics to be in ventricular fibrillation? A randomised control trial. Resuscitation. 2008;79:424–431.
  17. Jacobs IG, Finn JC, Oxer HF, Jelinek GA. CPR before defibrillation in out‐of‐hospital cardiac arrest: a randomized trial. Emerg Med Australas. 2005;17:39–45.
  18. Weisfeldt ML, Becker LB. Resuscitation after cardiac arrest: a 3‐phase time‐sensitive model. JAMA. 2002;288:3035–3038.
  19. Salcido DD, Menegazzi JJ, Suffoletto BP, Logue ES, Sherman LD. Association of intramyocardial high energy phosphate concentrations with quantitative measures of the ventricular fibrillation electrocardiogram waveform. Resuscitation. 2009;80:946–950.
  20. Holzer M, Behringer W, Sterz F, et al. Ventricular fibrillation median frequency may not be useful for monitoring during cardiac arrest treated with endothelin‐1 or epinephrine. Anesth Analg. 2004;99:1787–1793.
  21. Eftestøl T, Sunde K, Aase SO, Husøy JH, Steen PA. “Probability of successful defibrillation” as a monitor during CPR in out‐of‐hospital cardiac arrested patients. Resuscitation. 2001;48:245–254.
  22. Rodriguez‐Nunez A, Lopez‐Herce J, Castillo J, Bellon JM. Shockable rhythms and defibrillation during in‐hospital pediatric cardiac arrest. Resuscitation. 2014;85:387–391.
  23. Tomkins WG, Swain AH, Bailey M, Larsen PD. Beyond the pre‐shock pause: the effect of prehospital defibrillation mode on CPR interruptions and return of spontaneous circulation. Resuscitation. 2013;84:575–579.
  24. Sell RE, Lawrence B, Davis DP. Implementing a “resuscitation bundle” decreases incidence and improves outcomes in inpatient cardiopulmonary arrest. Circulation. 2009;120(18 Suppl):S1441.
  25. Schneider T, Martens PR, Paschen H, et al. Multicenter, randomized, controlled trial of 150‐J biphasic shocks compared with 200‐ to 360‐J monophasic shocks in the resuscitation of out‐of‐hospital cardiac arrest victims. Optimized Response to Cardiac Arrest (ORCA) Investigators. Circulation. 2000;102:1780–1787.
  26. van Alem AP, Chapman FW, Lank P, Hart AA, Koster RW. A prospective, randomised and blinded comparison of first shock success of monophasic and biphasic waveforms in out‐of‐hospital cardiac arrest. Resuscitation. 2003;58:17–24.
  27. Morrison LJ, Dorian P, Long J, et al. Out‐of‐hospital cardiac arrest rectilinear biphasic to monophasic damped sine defibrillation waveforms with advanced life support intervention trial (ORBIT). Resuscitation. 2005;66:149–157.
Issue: Journal of Hospital Medicine - 11(4)
Pages: 264-268
© 2015 Society of Hospital Medicine

Address for correspondence and reprint requests: Steve A. Aguilar, MD, UCSD Center for Resuscitation Science, Department of Emergency Medicine, 200 W. Arbor Dr. #8676, San Diego, CA 92103; Telephone: 619‐528‐5164; Fax: 619‐528‐6024; E‐mail: [email protected]

Encouraging Use of the MyFitnessPal App Does Not Lead to Weight Loss in Primary Care Patients

Study Overview

Objective. To evaluate the effectiveness and impact of using MyFitnessPal, a free, popular smartphone application (“app”), for weight loss.

Study design. 2-arm randomized controlled trial.

Setting and participants. Participants were recruited from 2 primary care clinics in the University of California, Los Angeles health system. The inclusion criteria for the study were age ≥ 18 years, body mass index (BMI) ≥ 25 kg/m2, an interest in losing weight, and ownership of a smartphone. The exclusion criteria included pregnancy, hemodialysis, life expectancy less than 6 months, lack of interest in weight loss, and current use of a smartphone app for weight loss. Of 633 individuals assessed, 212 were eligible for the study. Participants were block randomized within BMI strata (25–30 kg/m2 and > 30 kg/m2) to either usual primary care (n = 107) or usual primary care plus the app (n = 105).
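The stratified block randomization described above can be sketched as follows. This is an illustrative reconstruction only: the block size, seed, and function names are assumptions, not details reported by the trial.

```python
import random

def block_randomize(participants, block_size=4, seed=42):
    """Assign participants to 'control' or 'app' using stratified block
    randomization. Strata are BMI 25-30 vs. BMI > 30, as in the trial;
    block size and seed are illustrative assumptions."""
    rng = random.Random(seed)
    assignments = {}
    # Split participants into the two BMI strata used by the study.
    strata = {"bmi_25_30": [], "bmi_over_30": []}
    for pid, bmi in participants:
        key = "bmi_25_30" if bmi <= 30 else "bmi_over_30"
        strata[key].append(pid)
    # Within each stratum, allocate shuffled blocks with equal arm counts.
    for members in strata.values():
        for i in range(0, len(members), block_size):
            block = members[i:i + block_size]
            arms = (["control", "app"] * block_size)[:len(block)]
            rng.shuffle(arms)
            for pid, arm in zip(block, arms):
                assignments[pid] = arm
    return assignments
```

Permuting arms within fixed-size blocks keeps the two arms balanced within each BMI stratum even if enrollment stops partway through a stratum.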

Intervention. MyFitnessPal (MFP) was selected for this study based on previous focus groups with overweight primary care patients. MFP is a calorie-counting app that incorporates evidence-based and theory-based approaches to weight loss. Users can enter their current weight, goal weight, and goal rate of weight loss, which allows the app to generate the user’s daily, individualized calorie goal. MFP users also input daily weight, food intake, and physical activity, which produce certain outputs, including calorie counts, weight trends, and nutritional summaries based on food consumed.
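MFP's exact formula is proprietary, but a daily calorie goal derived from a target rate of weight loss typically follows the common approximation of 3,500 kcal per pound of body fat. A minimal sketch under that assumption (the function and parameter names are hypothetical, not MFP's API):

```python
KCAL_PER_LB = 3500  # widely used approximation for 1 lb of body fat

def daily_calorie_goal(maintenance_kcal, loss_rate_lb_per_week):
    """Daily calorie target: maintenance needs minus the daily deficit
    implied by the desired weekly weight-loss rate."""
    daily_deficit = loss_rate_lb_per_week * KCAL_PER_LB / 7
    return maintenance_kcal - daily_deficit

# A 2,000 kcal/day maintenance intake with a 1 lb/week goal implies a
# 500 kcal/day deficit, i.e., a 1,500 kcal/day target.
print(daily_calorie_goal(2000, 1.0))  # → 1500.0
```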

Participants in the intervention arm received help from research assistants in downloading MFP onto their smartphones and received a phone call 1 week after enrollment to assist with any technical issues with the app. Those in the control group were told to choose any preferred activity for weight loss. Both groups received usual care from their primary care provider, with two additional follow-up visits at 3 and 6 months. At the 3-month follow-up visit, all participants received a nutrition educational handout from www.myplate.gov.

Main outcome measures. The main outcome measure was weight change at 6 months. Weight, systolic blood pressure (SBP), and 3 self-reported behavioral mediators of weight loss (exercise, diet, and self-efficacy in weight loss) were measured for all participants at baseline and at 3 and 6 months. This study also gathered data from the MyFitnessPal company to measure frequency of app usage. At the 6-month follow-up visit, research assistants asked participants in the intervention arm about their experience using MFP, while those in the control group were asked whether they had used MFP in the past 6 months to assess contamination. The authors used a linear mixed effects model (PROC MIXED) in SAS statistical software to investigate the differences in weight change, SBP change, and change in behavioral survey items between the 2 groups while controlling for clinic site. In addition, they performed 2 sensitivity analyses: one to evaluate the impact of possible informative dropout (modeled using income, education, diet experience, treatment group, and baseline value), and one to assess the effect of excluding one control group outlier.
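SAS's PROC MIXED fits a linear mixed effects model; an analogous analysis can be sketched in Python with statsmodels' MixedLM. The data below are simulated and the column names are assumptions — this illustrates the model structure (a random intercept per participant, with group, visit, and clinic site as fixed effects), not the authors' actual code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60  # hypothetical number of participants

# Simulated long-format data: one row per participant per follow-up visit.
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), 2),
    "month": np.tile([3, 6], n),
    "group": np.repeat(rng.integers(0, 2, n), 2),  # 0 = control, 1 = app
    "site": np.repeat(rng.integers(0, 2, n), 2),   # clinic site covariate
})
df["wt_change"] = rng.normal(0, 3, 2 * n)  # null effect, as observed

# Random intercept per participant; group, visit, and site as fixed effects.
model = smf.mixedlm("wt_change ~ group * month + C(site)", df, groups=df["pid"])
fit = model.fit()
print(fit.params["group"])  # between-group effect estimate
```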

Results. The majority of participants were female (73%) with a mean age of 43.4 years (SD = 14.3). The mean BMI was 33.4 kg/m2 (SD = 7.09), and 48% of the participants identified themselves as non-Hispanic white. At the 3-month visit, 26% and 21% of the participants from the intervention and control arms were lost to follow-up. Additionally, at 6 months, 32% and 19% of intervention and control group participants were lost to follow-up.

There was no significant difference in weight change between the two groups at 3 months (control, +0.54 lb; intervention, –0.06 lb; P = 0.53) or at 6 months (control, +0.6 lb; intervention, –0.07 lb; P = 0.63); the between-group difference was –0.6 lb (95% confidence interval [CI], –2.5 to 1.3 lb; P = 0.53) at 3 months and –0.67 lb (CI, –3.3 to 2.1 lb; P = 0.63) at 6 months. A sensitivity analysis accounting for possible missing data suggested the same outcome, with a between-group difference at 6 months of 0.08 lb (CI, –3.04 to 3.20 lb; P = 0.96). The difference in systolic blood pressure between the groups was also not significant.
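A between-group difference in mean weight change with a 95% Wald confidence interval is the difference of the group means, with the two standard errors combined in quadrature. A small illustrative computation follows; the standard errors are synthetic values chosen for illustration, not the trial's data.

```python
import math

def between_group_diff(mean_tx, mean_ctrl, se_tx, se_ctrl, z=1.96):
    """Difference in mean weight change (treatment minus control) with a
    95% Wald confidence interval, assuming independent groups."""
    diff = mean_tx - mean_ctrl
    se = math.sqrt(se_tx ** 2 + se_ctrl ** 2)  # combine SEs in quadrature
    return diff, (diff - z * se, diff + z * se)

# Assumed standard errors of ~0.7 lb per arm give a difference of -0.6 lb
# with a CI of roughly (-2.5, 1.3), similar to the reported 3-month result.
d, (lo, hi) = between_group_diff(-0.06, 0.54, 0.7, 0.7)
print(round(d, 2), (round(lo, 2), round(hi, 2)))
```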

Participants in the intervention arm used a personal calorie goal more often than those in the control group, with a mean between-group difference of 1.9 days per week at 3 months (CI, 1.0 to 2.8; P < 0.001) and 2.0 days per week at 6 months (CI, 1.1 to 2.9; P < 0.001). At the 3-month visit, the authors found that individuals in the intervention group reported lower self-efficacy in achieving their weight loss goals than their counterparts (–0.85 on a 10-point scale; CI, –1.6 to –0.1; P = 0.026), but this difference was no longer significant at the 6-month follow-up. Otherwise, the results suggested no difference between the groups in self-reported behaviors regarding diet, exercise, and self-efficacy in weight loss.

The mean number of logins over the course of the study was 61, and the median was 19. Logins declined rapidly after enrollment for most participants in the intervention arm: 94 users logged in to the app during the first month, but only 34 logged in during the last month of the study. Of the 107 participants in the control group, 14 used MFP during the trial.

Despite the sharp decline in usage, MFP users in the intervention group were satisfied with the app: 79% were somewhat to completely satisfied, 92% would recommend it to a friend, and 80% planned to continue using MFP after the study. Users liked several aspects of MFP, including its ease of use (100%) and feedback on progress (88%), and 48% found it fun to use. Fewer participants appreciated the reminder feature (42%) and the social networking feature (13%). A common theme among MFP users was increased awareness of, and caution about, food choices. Among those who stopped using MFP, common complaints were that it was tedious (84%) or not easy to use (24%).

Conclusion. While most participants were satisfied with MFP, encouraging use of the app did not lead to more weight loss in primary care patients compared to usual care. There was decreased engagement with MFP over time.

Commentary

Despite efforts by the federal government to address the obesity epidemic in the United States, the prevalence of obesity among children and adults remains high. Approximately 17% of youth and 35% of adults in the United States have a BMI in the obese range (> 30 kg/m2) [1]. In addition to its strong association with chronic diseases such as type 2 diabetes mellitus, hypertension, and hypercholesterolemia [2], the obesity epidemic carries a staggering financial burden. The annual cost of obesity in the U.S. was estimated at $147 billion in 2008, and medical costs for each person with obesity were $1429 higher than for those of normal weight [3,4]. Therefore, finding cost-effective and easily accessible methods to manage obesity is imperative. The Pew Research Center found that 64% of adults in the U.S. own a smartphone, and 62% of those have used their phone to look up health information [5]. Thus, smartphone applications that deliver weight management information and strategies may be a cost-effective and feasible means to reach a large population.

This study assessed the impact of MyFitnessPal, a free and widely popular mobile application, as an approach to reduce weight among patients in a primary care setting. The authors compared weight change between patients who received usual care from primary care providers (PCPs) and those who used MFP in addition to their usual care. They found no significant difference in weight loss between the two groups at 3 and 6 months during the trial. Despite this negative finding, this study makes important contributions to the e-health literature and highlights important considerations for similar studies.

While the exact reasons for the lack of effect are unclear, the ineffectiveness of such a brief intervention is not surprising. It is important to note that the PCPs did not assist or encourage patients to use the MFP app in this study. A systematic review of technology-assisted interventions in primary care suggests that technologies such as smartphone apps can be successful tools for weight loss in the primary care setting, but only with guidance and feedback from a health care team [6]. Further, the intervention may not have been intense enough to achieve clinically significant weight loss. The U.S. Preventive Services Task Force recommends intensive interventions with at least 12 to 26 visits over 12 months to address obesity [7]. Thus, future studies of this app should include more intensive counseling from the healthcare team to determine whether this improves adherence to lifestyle changes.

Strengths of this study include the use of a randomized controlled trial design, double-blinding of both participants and PCPs, and analyses for outliers and missing data. These design considerations increase the validity of the trial outcome. In addition, the authors rigorously assessed the relationship between the exposure and the outcome. The investigators set a high enrollment goal to account for a potentially high attrition rate (up to 50%). Because there was considerable loss to follow-up at both 3 and 6 months, the authors performed sensitivity analyses to determine whether this biased the outcomes. The team assessed crossover contamination at the end of the study by asking participants in the control group whether they had used MFP during the trial. The authors also conducted a linear regression to examine whether baseline self-efficacy was a significant predictor of weight change, controlling for the interaction between self-efficacy and group assignment. Additionally, sensitivity analyses were performed to see whether the result from one outlier in the control group who achieved significant weight loss biased the findings.

As one of the first studies to investigate the use of a mobile app for weight loss in the primary care setting, the pragmatic design was impressive. The study showed that introduction to MFP can be done in 5 minutes and that the intervention was highly acceptable to patients. Since research assistants provided minimal guidance to enrolled participants, this study design provided an opportunity to observe the app’s efficacy in the real-world setting for the general public.

Although the authors designed a valid and pragmatic study, several aspects could be improved. As the authors note, this study did not explicitly measure readiness for change, so it was challenging to accurately and systematically determine the motivation of those in the intervention arm. Additionally, despite the diverse income and race of participants, the majority of the subjects were college-educated. In 2014, only 34% of the U.S. population had completed a bachelor's degree or higher, and only 8% had completed a master's or higher degree [8]. This may limit the generalizability of the study, as education level could influence app acceptance and usage behavior.

Applications for Clinical Practice

With a large number of people using smartphones, there are increasing opportunities to use this delivery system to provide weight management information and support strategies for weight loss and lifestyle behavior change. Previous studies showed that smartphone technology is a promising tool to facilitate weight management [9,10], but the best practices for implementing smartphones in the primary care setting for weight management are still unknown. Based on this study and previous research on technology-based health interventions, providing the MFP app alone, without additional counseling or intervention, will not lead to clinically significant weight loss in the majority of patients. Future studies should determine whether adding the MFP app is more efficacious as part of a more intensive behavioral intervention. Further studies should also determine whether PCP support in assessing readiness to use health apps and other behavior change technologies increases adherence and weight loss.

—Pich Seekaew, BS, and Melanie Jay MD, MS

References

1. Ogden CL, Carroll MD, Kit BK, Flegal KM. Prevalence of childhood and adult obesity in the United States, 2011-2012. JAMA 2014;311:806–14.

2. Clark JM, Brancati FL. The challenge of obesity-related chronic diseases. J Gen Intern Med 2000;15:828–9.

3. Wang CY, McPherson K, Marsh T, et al. Health and economic burden of the projected obesity trends in the USA and the UK. Lancet 2011;378:815–25.

4. Finkelstein EA, Trogdon JG, Cohen JW, et al. Annual medical spending attributable to obesity: payer-and service-specific estimates. Health Affairs 2009;28:822–31.

5. Smith A. U.S. smartphone use in 2015. Pew Research Center; 2015.

6. Levine DM, Savarimuthu S, Squires A, et al. Technology-assisted weight loss interventions in primary care: a systematic review. J Gen Intern Med 2014;30:107–17.

7. Moyer VA. Screening for and management of obesity in adults: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2012;157:373–8.

8. U.S. Department of Education, National Center for Education Statistics. The condition of education 2015 (NCES 2015–144). Educational attainment 2015.

9. Stephens J, Allen J. Mobile phone interventions to increase physical activity and reduce weight. J Cardiovasc Nurs 2013; 28:320–9.

10. Hebden L, Cook A, Van Der Ploeg HP, Allman-Farinelli M. Development of smartphone applications for nutrition and physical activity behavior change. JMIR Res Protoc 2012;1:e9.

Issue: Journal of Clinical Outcomes Management - NOVEMBER 2015, VOL. 22, NO. 11


Strengths of this study include the use of a randomized controlled trial design, double-blinding of both participants and PCPs, and analyses for outlier and missing data. These design considerations increase the validity of the trial outcome. In addition, the authors rigorously assessed the relationship between the exposure and the outcome. The group set a high goal of enrollment to account for potential high rate of attrition (up to 50%). Since there was considerable loss to follow-up at both 3 and 6 months, the authors performed sensitivity analyses to determine if this biased the outcomes. The team assessed crossover contamination at the end of the study by asking participants in the control group if they used MFP during the trial. The authors also conducted a linear regression to examine if baseline self-efficacy was a significant predictor of weight change, controlling for the interaction between self-efficacy and group assignment. Additionally, sensitivity analyses were performed to see if the result from one outlier in the control group who achieved significant weight loss biased the findings.

As one of the first studies to investigate the use of a mobile app for weight loss in the primary care setting, the pragmatic design was impressive. The study showed that introduction to MFP can be done in 5 minutes and that the intervention was highly acceptable to patients. Since research assistants provided minimal guidance to enrolled participants, this study design provided an opportunity to observe the app’s efficacy in the real-world setting for the general public.

Although the authors efficiently designed a valid and pragmatic study, there were several aspects of the study that could be improved. As previously stated by the authors, this study did not explicitly measure the readiness for change, hence it was challenging to accurately and systematically determine the motivation of those in the intervention arm. Additionally, despite the diverse income and race of participants, the majority of the subjects were college-educated. In 2014, only 34% of the U.S. population completed a bachelor’s degree or high, and only 8% completed a master’s or higher degree [8]. This may limit the generalizability of the study, as education level could influence app acceptance and usage behavior.

Applications for Clinical Practice

With a large number of people using smartphones, there are increasing opportunities to use this delivery system to provide weight management information and support strategies for weight loss and lifestyle behavior changes. Previous studies showed that smartphone technology is a promising tool to facilitate weight management[9, 10], but the best practices for implementation of smart phones in the primary setting for weight management are still unknown. Based on this study and previous research done on technology intervention on health, providing the MFP app alone without additional counseling or intervention will not lead to clinically significant weight loss in the majority of patients. Future studies should determine if adding the MFP app is more efficacious when part of a more intensive behavioral intervention. Further studies should also determine whether PCP support in assessing readiness to use health apps and other behavior change technologies increases adherence and weight loss.

—Pich Seekaew, BS, and Melanie Jay MD, MS

Study Overview

Objective. To evaluate the effectiveness and impact of using MyFitnessPal, a free, popular smartphone application (“app”), for weight loss.

Study design. 2-arm randomized controlled trial.

Setting and participants. Participants were recruited from 2 primary care clinics in the University of California, Los Angeles health system. The inclusion criteria for the study were age ≥ 18 years, body mass index (BMI) ≥ 25 kg/m2, an interest in losing weight, and ownership of a smartphone. The exclusion criteria included pregnancy, hemodialysis, life expectancy less than 6 months, lack of interest in weight loss, and current use of a smartphone app for weight loss. Of 633 individuals assessed, 212 were eligible for the study. Participants were block randomized within BMI strata (25–30 kg/m2 and > 30 kg/m2) to either usual primary care (n = 107) or usual primary care plus the app (n = 105).
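The stratified block randomization described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the block size of 4 and the fixed seed are assumptions, as the paper does not report them.

```python
import random

def block_randomize(n_participants: int, block_size: int = 4, seed: int = 0) -> list:
    """Assign participants within one stratum (e.g., one BMI band) to arms
    in shuffled blocks, keeping arm sizes balanced as enrollment proceeds."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        # Each block contains equal numbers of each arm, in random order.
        block = ["control", "intervention"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

arms = block_randomize(8)
print(arms.count("control"), arms.count("intervention"))  # balanced: 4 4
```

Running the procedure separately within each BMI stratum, as the trial did, keeps both arms balanced on BMI category.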

Intervention. MyFitnessPal (MFP) was selected for this study based on previous focus groups with overweight primary care patients. MFP is a calorie-counting app that incorporates evidence-based and theory-based approaches to weight loss. Users can enter their current weight, goal weight, and goal rate of weight loss, which allows the app to generate the user’s daily, individualized calorie goal. MFP users also input daily weight, food intake, and physical activity, which produce certain outputs, including calorie counts, weight trends, and nutritional summaries based on food consumed.
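A daily calorie goal of this kind might be derived as sketched below. This is a hypothetical illustration only: MFP's actual formula is proprietary and not described in the study, and the 3500 kcal-per-pound heuristic and the maintenance-calorie input are assumptions.

```python
def daily_calorie_goal(maintenance_kcal: float, goal_rate_lb_per_week: float) -> float:
    """Daily calorie target to lose weight at the requested rate.

    maintenance_kcal: estimated daily calories to maintain current weight
    goal_rate_lb_per_week: desired loss in lb/week; ~3500 kcal per lb of fat
    """
    daily_deficit = goal_rate_lb_per_week * 3500 / 7
    return maintenance_kcal - daily_deficit

# A user maintaining on 2200 kcal/day who wants to lose 1 lb/week
# gets a goal of 2200 - 500 = 1700 kcal/day.
print(daily_calorie_goal(2200, 1.0))  # 1700.0
```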

Participants in the intervention arm received help from research assistants in downloading MFP onto their smartphones and received a phone call 1 week after enrollment to assist with any technical issues with the app. Those in the control group were told to choose any preferred activity for weight loss. Both groups received usual care from their primary care provider, with an additional two follow-up visits at 3 and 6 months. At the 3-month follow-up visit, all participants received a nutrition educational handout from www.myplate.gov.

Main outcome measures. The main outcome measure was weight change at 6 months. Weight, systolic blood pressure (SBP), and 3 self-reported behavioral mediators of weight loss (exercise, diet, and self-efficacy in weight loss) were measured for all participants at baseline and at 3 and 6 months. The study also gathered data from the MyFitnessPal company to measure frequency of app usage. At the 6-month follow-up visit, research assistants asked participants in the intervention arm about their experience using MFP, while those in the control group were asked if they had used MFP in the past 6 months to assess contamination. The authors used a linear mixed effects model (PROC MIXED in SAS) to investigate the differences in weight change, SBP change, and change in behavioral survey items between the 2 groups while controlling for clinic site. In addition, they performed 2 sensitivity analyses: one to evaluate the impact of possible informative dropout (modeled using income, education, diet experience, treatment group, and baseline value), and one to assess the effect of excluding one control group outlier.

Results. The majority of participants were female (73%), with a mean age of 43.4 years (SD = 14.3). The mean BMI was 33.4 kg/m2 (SD = 7.09), and 48% of the participants identified themselves as non-Hispanic white. At the 3-month visit, 26% and 21% of participants in the intervention and control arms, respectively, were lost to follow-up; at 6 months, the corresponding figures were 32% and 19%.

There was no significant difference in weight change between the two groups at 3 months (control, +0.54 lb; intervention, –0.06 lb) or at 6 months (control, +0.6 lb; intervention, –0.07 lb); the between-group difference was –0.6 lb at 3 months (95% confidence interval [CI], –2.5 to 1.3 lb; P = 0.53) and –0.67 lb at 6 months (CI, –3.3 to 2.1 lb; P = 0.63). The sensitivity analysis addressing possible missing data suggested the same outcome, with a between-group difference at 6 months of 0.08 lb (CI, –3.04 to 3.20 lb; P = 0.96). The difference in systolic blood pressure between the groups was not significant.

Participants in the intervention arm used a personal calorie goal more often than those in the control group, with a mean between-group difference of 1.9 days per week at 3 months (CI, 1.0 to 2.8; P < 0.001) and 2.0 days per week at 6 months (CI, 1.1 to 2.9; P < 0.001). At the 3-month visit, individuals in the intervention group reported lower self-efficacy in achieving their weight loss goals than their counterparts (–0.85 on a 10-point scale; CI, –1.6 to –0.1; P = 0.026), but this difference was no longer significant at the 6-month follow-up. Otherwise, there were no differences between the groups in self-reported behaviors regarding diet, exercise, and self-efficacy in weight loss.

The mean number of total logins during the study was 61, and the median was 19. Logins declined rapidly after enrollment for most participants in the intervention arm: 94 users logged in to the app during the first month of the study, compared with 34 during the last month. Of the 107 participants in the control group, 14 used MFP during the trial.

Despite the sharp decline in usage, MFP users in the intervention group were satisfied with the app: 79% were somewhat to completely satisfied, 92% would recommend it to a friend, and 80% planned to continue using MFP after the study. Users liked several aspects of MFP, including its ease of use (100%) and feedback on progress (88%), and 48% found it fun to use. Fewer participants appreciated the reminder feature (42%) and the social networking feature (13%). A common theme among MFP users was increased awareness of, and more caution about, food choices. Among those who stopped using MFP, common complaints were that it was tedious (84%) or not easy to use (24%).

Conclusion. While most participants were satisfied with MFP, encouraging use of the app did not lead to more weight loss in primary care patients compared to usual care. There was decreased engagement with MFP over time.

Commentary

Despite efforts by the federal government to address the obesity epidemic in the United States, the prevalence of obesity among children and adults remains high. Approximately 17% of youth and 35% of adults in the United States have a BMI in the obese range (> 30 kg/m2) [1]. In addition to its strong association with chronic diseases such as type 2 diabetes mellitus, hypertension, and hypercholesterolemia [2], the obesity epidemic carries a staggering financial burden. The annual cost of obesity in the U.S. was estimated at $147 billion in 2008, and medical costs for each person with obesity were $1429 higher than those for a person of normal weight [3,4]. Therefore, finding cost-effective and easily accessible methods to manage obesity is imperative. The Pew Research Center found that 64% of adults in the U.S. own a smartphone, and 62% of those have used their phone to look up health information [5]. Thus, smartphone applications that deliver weight management information and strategies may be a cost-effective and feasible means to reach a large population.

This study assessed the impact of MyFitnessPal, a free and widely popular mobile application, as an approach to reduce weight among patients in a primary care setting. The authors compared weight change between patients who received usual care from primary care providers (PCPs) and those who used MFP in addition to their usual care. They found no significant difference in weight loss between the two groups at 3 and 6 months during the trial. Despite this negative finding, this study makes important contributions to the e-health literature and highlights important considerations for similar studies.

While the exact reasons for the lack of effect are unclear, the ineffectiveness of such a brief intervention is not surprising. It is important to note that the PCPs did not assist or encourage patients to use the MFP app in this study. A systematic review of technology-assisted interventions in primary care suggests that technologies such as smartphone apps can be successful tools for weight loss in the primary care setting, but only with guidance and feedback from a health care team [6]. Further, the intervention may not have been intense enough to achieve clinically significant weight loss. The U.S. Preventive Services Task Force recommends intensive interventions, with at least 12 to 26 visits over 12 months, to address obesity [7]. Thus, future studies of this app should include more intensive counseling from the health care team to determine if this improves adherence to lifestyle changes.

Strengths of this study include the randomized controlled trial design, blinding of both participants and PCPs, and analyses for outliers and missing data. These design considerations increase the validity of the trial outcome. In addition, the authors rigorously assessed the relationship between the exposure and the outcome. The group set a high enrollment goal to account for a potentially high rate of attrition (up to 50%). Since there was considerable loss to follow-up at both 3 and 6 months, the authors performed sensitivity analyses to determine whether this biased the outcomes. The team assessed crossover contamination at the end of the study by asking participants in the control group whether they had used MFP during the trial. The authors also conducted a linear regression to examine whether baseline self-efficacy was a significant predictor of weight change, controlling for the interaction between self-efficacy and group assignment. Additionally, sensitivity analyses were performed to determine whether one outlier in the control group who achieved significant weight loss biased the findings.

This was one of the first studies to investigate the use of a mobile app for weight loss in the primary care setting, and its pragmatic design was impressive. The study showed that an introduction to MFP can be done in 5 minutes and that the intervention was highly acceptable to patients. Since research assistants provided minimal guidance to enrolled participants, the design provided an opportunity to observe the app's effectiveness in a real-world setting for the general public.

Although the authors designed a valid and pragmatic study, several aspects could be improved. As the authors state, the study did not explicitly measure readiness for change, so it was difficult to accurately and systematically determine the motivation of those in the intervention arm. Additionally, despite the diverse income and race of participants, the majority of subjects were college-educated. In 2014, only 34% of the U.S. population had completed a bachelor's degree or higher, and only 8% had completed a master's degree or higher [8]. This may limit the generalizability of the study, as education level could influence app acceptance and usage behavior.

Applications for Clinical Practice

With a large number of people using smartphones, there are increasing opportunities to use this delivery system to provide weight management information and to support strategies for weight loss and lifestyle behavior change. Previous studies showed that smartphone technology is a promising tool to facilitate weight management [9,10], but best practices for implementing smartphone apps in the primary care setting for weight management are still unknown. Based on this study and previous research on technology-assisted health interventions, providing the MFP app alone, without additional counseling or intervention, will not lead to clinically significant weight loss in the majority of patients. Future studies should determine whether adding the MFP app is more efficacious as part of a more intensive behavioral intervention. Further studies should also determine whether PCP support in assessing readiness to use health apps and other behavior change technologies increases adherence and weight loss.

—Pich Seekaew, BS, and Melanie Jay, MD, MS

References

1. Ogden CL, Carroll MD, Kit BK, Flegal KM. Prevalence of childhood and adult obesity in the United States, 2011-2012. JAMA 2014;311:806–14.

2. Clark JM, Brancati FL. The challenge of obesity-related chronic diseases. J Gen Intern Med 2000;15:828–9.

3. Wang CY, McPherson K, Marsh T, et al. Health and economic burden of the projected obesity trends in the USA and the UK. Lancet 2011;378:815–25.

4. Finkelstein EA, Trogdon JG, Cohen JW, et al. Annual medical spending attributable to obesity: payer-and service-specific estimates. Health Affairs 2009;28:822–31.

5. Smith A. U.S. smartphone use in 2015. Pew Research Center Internet Science Tech RSS. 2015.

6. Levine DM, Savarimuthu S, Squires A, et al. Technology-assisted weight loss interventions in primary care: a systematic review. J Gen Intern Med 2014;30:107–17.

7. Moyer VA. Screening for and management of obesity in adults: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2012;157:373–8.

8. U.S. Department of Education, National Center for Education Statistics. The condition of education 2015 (NCES 2015–144). Educational attainment 2015.

9. Stephens J, Allen J. Mobile phone interventions to increase physical activity and reduce weight. J Cardiovasc Nurs 2013; 28:320–9.

10. Hebden L, Cook A, Van Der Ploeg HP, Allman-Farinelli M. Development of smartphone applications for nutrition and physical activity behavior change. JMIR Res Protoc 2012;1:e9.

Issue
Journal of Clinical Outcomes Management - NOVEMBER 2015, VOL. 22, NO. 11
Display Headline
Encouraging Use of the MyFitnessPal App Does Not Lead to Weight Loss in Primary Care Patients

CHA2DS2-VASc Score Modestly Predicts Ischemic Stroke, Thromboembolic Events, and Death in Patients with Heart Failure Without Atrial Fibrillation

Study Overview

Objective. To determine if CHA2DS2-VASc score, a score commonly used to assess risk of cerebrovascular events among adults with atrial fibrillation, predicts ischemic stroke, thromboembolism, and death in a cohort of patients with heart failure with and without atrial fibrillation.

Design. Prospective cohort study.

Setting and participants. Patients in Denmark aged 50 years or older discharged with a primary diagnosis of incident heart failure between 1 January 2000 and 31 December 2012. Patients with atrial fibrillation were identified by a hospital diagnosis of atrial fibrillation or atrial flutter from 1994 onwards. The study excluded patients treated with a vitamin K antagonist within 6 months prior to the heart failure diagnosis and patients with a diagnosis of cancer or chronic obstructive pulmonary disease. The study utilized 3 national Danish registries: the National Patient Registry (which records all hospital admissions and diagnoses using ICD-10), the National Prescription Registry (prescription data), and the Civil Registry System (demographics and vital statistics). The registries were linked and have been well validated.

Main outcome measure. The primary outcome measure was defined as a hospital diagnosis of ischemic stroke or thromboembolic events, transient ischemic attack, systemic embolism, pulmonary embolism or myocardial infarction within 1 year after heart failure diagnosis. A secondary outcome measure was all-cause death at 1 year.

Analysis. Patients were risk stratified using the CHA2DS2-VASc score. Patients were given 1 point each for congestive heart failure, hypertension, age 65 to 74 years, diabetes mellitus, vascular disease, and female sex, and 2 points each for age 75 years or older and previous thromboembolic events. The authors conducted a time-to-event analysis to examine the relationship between CHA2DS2-VASc score and the risk of ischemic stroke, thromboembolic events, and death, separately among those with and without atrial fibrillation. Patients were censored if they began anticoagulation therapy during follow-up. The performance of the CHA2DS2-VASc score in predicting the risk of each outcome was quantified using C statistics. Multiple sensitivity analyses were conducted: to account for patients who received a diagnosis of atrial fibrillation shortly after the heart failure diagnosis, to include patients with chronic obstructive pulmonary disease, and to perform a split-sample analysis by date of heart failure diagnosis.
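The point assignments above translate directly into code. The sketch below implements the scoring rule exactly as stated in the Analysis section; it is an illustration, not the authors' implementation.

```python
def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, vascular_disease: bool,
                 prior_thromboembolism: bool) -> int:
    """CHA2DS2-VASc: 1 point each for CHF, hypertension, age 65-74,
    diabetes, vascular disease, and female sex; 2 points each for
    age >= 75 and previous thromboembolic events."""
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    if age >= 75:
        score += 2
    elif 65 <= age <= 74:
        score += 1
    score += 2 if prior_thromboembolism else 0
    return score

# 70-year-old woman with heart failure and hypertension: 1+1+1+1 = 4
print(cha2ds2_vasc(70, True, True, True, False, False, False))  # 4
```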

Main results. A total of 42,987 patients with incident heart failure during 2000–2012 were included in the cohort, 21.9% of whom had atrial fibrillation at baseline. The median follow-up period was 1.8 years. For patients with heart failure with or without a diagnosis of atrial fibrillation, the 1-year absolute risks for all outcomes were high and increased with increasing CHA2DS2-VASc score. For ischemic stroke and death, absolute risks were higher among patients with heart failure and atrial fibrillation than among patients without atrial fibrillation. At high CHA2DS2-VASc scores, the risk of thromboembolism was higher among patients without atrial fibrillation than among those with atrial fibrillation. The CHA2DS2-VASc score predicted the end point of ischemic stroke at 1 and 5 years modestly, with C statistics of 0.67 and 0.69, respectively, among those without atrial fibrillation and 0.64 and 0.71 among those with atrial fibrillation. The negative predictive value for all events at 1 year was around 90% when using a cutoff score of 1 for patients without atrial fibrillation, but only around 75% at 5 years.
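The C statistics reported here quantify discrimination: the probability that a randomly chosen patient who experienced the event received a higher score than one who did not. A minimal binary-outcome sketch follows; the authors' time-to-event version additionally accounts for censoring.

```python
from itertools import product

def c_statistic(scores_events, scores_nonevents):
    """Concordance: P(score_event > score_nonevent), ties counted as 0.5.

    Simplified binary-outcome version of the C statistic (equivalent to
    the area under the ROC curve).
    """
    pairs = list(product(scores_events, scores_nonevents))
    concordant = sum(1.0 for e, n in pairs if e > n)
    tied = sum(0.5 for e, n in pairs if e == n)
    return (concordant + tied) / len(pairs)

# Patients with events scored [4, 5, 3]; without events scored [2, 3, 1]:
# 8.5 of 9 pairs are concordant (one tie), so C ~= 0.944
print(round(c_statistic([4, 5, 3], [2, 3, 1]), 3))  # 0.944
```

A C statistic of 0.5 means the score discriminates no better than chance, and 1.0 means perfect discrimination, which puts the reported values of 0.64 to 0.71 in the "modest" range.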

Conclusions. Although the CHA2DS2-VASc score was developed to predict ischemic stroke among patients with atrial fibrillation, it also has modest predictive accuracy when applied to patients with heart failure without atrial fibrillation. Among patients with heart failure with a high CHA2DS2-VASc score, the risks of all adverse outcomes were high regardless of whether concomitant atrial fibrillation was present, and the risk of thromboembolism was higher among those without atrial fibrillation than those with concomitant atrial fibrillation. Because of the modest predictive accuracy, the clinical utility of CHA2DS2-VASc among patients with heart failure needs to be further determined.

Commentary

Clinical prediction rules are increasingly relied upon in the clinical setting to drive medical decision making, allowing clinicians to weigh the risks and benefits of interventions in a concrete, evidence-based manner [1]. The CHA2DS2-VASc score, endorsed in guidelines for assessing the risk of stroke among patients with atrial fibrillation, is widely used in clinical practice [2,3] to help make treatment decisions, such as the use of anticoagulation. Applying this prediction rule to patients with heart failure but without atrial fibrillation is a novel use of the widely adopted rule. The rationale is that the CHA2DS2-VASc score comprises a cluster of stroke risk factors that increase the risk of stroke whether or not atrial fibrillation is present, and thus may capture stroke risk beyond atrial fibrillation status [4]. The authors selected a patient group with a high rate of mortality, those with incident heart failure, to evaluate the hypothesis that the CHA2DS2-VASc score could predict stroke outcomes in heart failure patients without atrial fibrillation in a manner similar to that in atrial fibrillation populations, and that at high CHA2DS2-VASc scores, the risk for stroke would be comparable among heart failure patients with and without atrial fibrillation.

What the authors found is that the scoring algorithm predicted stroke occurrence modestly whether or not atrial fibrillation was present, and that stroke risk was high among those with the highest scores regardless of whether patients had atrial fibrillation. These findings underscore the potential use of the scoring algorithm beyond the population with atrial fibrillation and highlight the need for further research in the highest-risk group of heart failure patients without atrial fibrillation to determine whether anticoagulation may reduce stroke risk in this population. Minor study limitations include the use of an administrative dataset, in which diagnosis information may be incomplete or erroneous, and potentially limited generalizability, given differences in the makeup of the Danish study population compared with other populations.

Applications for Clinical Practice

The study explored the use of a clinical prediction rule beyond the condition for which it was developed and found that, particularly in high-risk groups, risk scores still predicted adverse events, albeit modestly. For clinicians, it highlights both the utility of the risk score and the current gap in knowledge about stroke prevention in the highest-risk group of patients without atrial fibrillation. Further studies are needed to determine whether anticoagulation therapy benefits this high-risk group for stroke prevention.

—William W. Hung, MD, MPH

References

1. McGinn TG, Guyatt GH, Wyer PC, et al. Users’ guides to the medical literature: XXII: how to use articles about clinical decision rules. Evidence-Based Medicine Working Group. JAMA 2000;284:79–84.

2. January CT, Wann LS, Alpert JS, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2014 AHA/ACC/HRS guideline for the management of patients with atrial fibrillation: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the Heart Rhythm Society. J Am Coll Cardiol 2014;64:e1–e76.

3. Camm AJ, Lip GY, De Caterina R, et al; ESC Committee for Practice Guidelines. 2012 Focused update of the ESC guidelines for the management of atrial fibrillation: an update of the 2010 ESC guidelines for the management of atrial fibrillation. Eur Heart J 2012;33:2719–47.

4. Mitchell LB, Southern DA, Galbraith D, et al; APPROACH Investigators. Prediction of stroke or TIA in patients without atrial fibrillation using CHADS2 and CHA2DS2-VASc scores. Heart 2014;100:1524–30.

Issue
Journal of Clinical Outcomes Management - NOVEMBER 2015, VOL. 22, NO. 11
Publications
Topics
Sections

Study Overview

Objective. To determine if CHA2DS2-VASc score, a score commonly used to assess risk of cerebrovascular events among adults with atrial fibrillation, predicts ischemic stroke, thromboembolism, and death in a cohort of patients with heart failure with and without atrial fibrillation.

Design. Prospective cohort study.

Setting and participants. Patients in Denmark aged 50 years or older discharged with a primary diagnosis of incident heart failure between 1 Jan 2000 and 31 December 2012. Patients with atrial fibrillation were identified by a hospital diagnosis of atrial fibrillation or atrial flutter from 1994 onwards. The study excluded patients treated with vitamin K antagonist within 6 months prior to heart failure diagnosis and patients with a diagnosis of cancer or chronic obstructive pulmonary disease. The study utilized 3 national Danish registries: the National Patient Registry (which records all hospital admissions and diagnoses using ICD-10), the National Prescription Registry (prescription data), and the Civil Registry System (demographics and vital statistics). The registries were linked and have been well validated.

Main outcome measure. The primary outcome measure was defined as a hospital diagnosis of ischemic stroke or thromboembolic events, transient ischemic attack, systemic embolism, pulmonary embolism or myocardial infarction within 1 year after heart failure diagnosis. A secondary outcome measure was all-cause death at 1 year.

Analysis. Patients were risk stratified using the CHA2DS2-VASc score. Patients were given 1 point each for congestive heart failure, hypertension, age 65 to 74 years, diabetes mellitus, vascular disease, and female sex, and 2 points each for age 75 years or older and previous thromboembolic events. The authors conducted a time-to-event analysis to examine the relationship between CHA2DS2-VASc score and the risk of ischemic stroke, thromboembolic events, and death separately among those with and without atrial fibrillation. Patients were censored if they began anticoagulation therapy during follow-up. The performance of the CHA2DS2-VASc score in predicting the risk of outcomes was quantified using C statistics. Multiple sensitivity analyses were conducted to account for patients who had a diagnosis of atrial fibrillation shortly after diagnosis of heart failure and to include patients with chronic obstructive pulmonary disease, and a split-sample analysis by date of heart failure diagnosis was also conducted.
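The point scheme described above maps directly to a small scoring function. The sketch below is our own illustration (the function and variable names are ours, not code from the study):

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 vascular_disease, prior_thromboembolism):
    """CHA2DS2-VASc score per the point scheme above: 1 point each for
    congestive heart failure, hypertension, diabetes, vascular disease,
    female sex, and age 65-74; 2 points each for age >= 75 and a
    previous thromboembolic event."""
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    score += 2 if prior_thromboembolism else 0
    return score

# Example: a 76-year-old woman with hypertension and no other risk
# factors scores 2 (age) + 1 (sex) + 1 (hypertension) = 4.
print(cha2ds2_vasc(76, True, False, True, False, False, False))  # prints 4
```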

Main results. A total of 42,987 patients with incident heart failure during 2000–2012 were included in the cohort, 21.9% of whom had atrial fibrillation at baseline. The median follow-up period was 1.8 years. For patients with heart failure with or without a diagnosis of atrial fibrillation, the 1-year absolute risks for all outcomes were high and increased with increasing CHA2DS2-VASc score. For ischemic stroke and death, absolute risks were higher among patients with heart failure and atrial fibrillation than among patients without atrial fibrillation. At high CHA2DS2-VASc scores, the risk of thromboembolism was higher among patients without atrial fibrillation than among those with atrial fibrillation. The CHA2DS2-VASc score predicted the end point of ischemic stroke at 1 and 5 years modestly, with C statistics of 0.67 and 0.69 among those without atrial fibrillation and 0.64 and 0.71 among those with atrial fibrillation. The negative predictive value for all events at 1 year was around 90% when using a cutoff score of 1 for patients without atrial fibrillation, but only around 75% at 5 years.
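For readers unfamiliar with the C statistic, the sketch below shows how it is computed in the simplest, binary-outcome case: the probability that a randomly chosen patient who had the event received a higher score than one who did not, with ties counting one half. This is illustrative only; the study computed C statistics on censored time-to-event registry data, which requires a more involved concordance calculation.

```python
def c_statistic(scores, events):
    """Concordance (C) statistic for a binary outcome. `scores` are risk
    scores; `events` are 1/0 outcome indicators. Ties count as 0.5."""
    pos = [s for s, e in zip(scores, events) if e]
    neg = [s for s, e in zip(scores, events) if not e]
    if not pos or not neg:
        raise ValueError("need at least one event and one non-event")
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0   # event case ranked higher: concordant
            elif p == n:
                concordant += 0.5   # tied scores contribute half
    return concordant / (len(pos) * len(neg))

# Perfect discrimination yields 1.0; a score carrying no information
# about the outcome yields roughly 0.5.
print(c_statistic([4, 3, 2, 1], [1, 1, 0, 0]))  # prints 1.0
```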

Conclusions. Although the CHA2DS2-VASc score was developed to predict ischemic stroke among patients with atrial fibrillation, it also has modest predictive accuracy when applied to patients with heart failure without atrial fibrillation. Among patients with heart failure with a high CHA2DS2-VASc score, the risks of all adverse outcomes were high regardless of whether concomitant atrial fibrillation was present, and the risk of thromboembolism was higher among those without atrial fibrillation than those with concomitant atrial fibrillation. Because of the modest predictive accuracy, the clinical utility of CHA2DS2-VASc among patients with heart failure needs to be further determined.

Commentary

Clinical prediction rules are increasingly relied upon in clinical settings to drive medical decision making, allowing clinicians to weigh risks and benefits of interventions in a concrete, evidence-based manner [1]. The CHA2DS2-VASc score, endorsed in guidelines for assessing risk of stroke among patients with atrial fibrillation, is widely used in clinical practice [2,3] to help make decisions about treatment, such as the use of anticoagulation. Applying this prediction rule to patients with heart failure but without atrial fibrillation is a novel use of the widely adopted rule. The rationale is that the CHA2DS2-VASc score includes a cluster of risk factors that increase the risk of stroke whether or not atrial fibrillation is present, and thus may capture stroke risk beyond whether a patient has atrial fibrillation [4]. The authors selected a patient group with a high rate of mortality (those with incident heart failure) to evaluate the hypothesis that the CHA2DS2-VASc score could predict stroke outcomes in heart failure patients without atrial fibrillation in a manner similar to that in atrial fibrillation populations, and that at high CHA2DS2-VASc scores, the risk for stroke would be comparable among heart failure patients.

The authors found that the scoring algorithm predicted stroke occurrence modestly whether or not atrial fibrillation was present, and that stroke risk was high among those with the highest scores regardless of whether patients had atrial fibrillation. These findings underscore the potential use of the scoring algorithm beyond the population with atrial fibrillation, and highlight the need for further research in the highest-risk group of heart failure patients without atrial fibrillation to determine whether anticoagulation may reduce stroke risk in this population. Minor study limitations included the use of an administrative dataset, in which diagnosis information may be incomplete or erroneous, and potentially limited generalizability, given differences in the makeup of the Danish study population compared with other populations.

Applications for Clinical Practice

The study explores the use of a clinical prediction rule beyond the condition for which it was developed and found that, particularly in high-risk groups, the risk score still predicted adverse events, albeit modestly. For clinicians, it highlights both the utility of the risk score and the current gap in knowledge about stroke prevention in the highest-risk group of patients without atrial fibrillation. Further studies are needed to determine whether anticoagulation therapy benefits this high-risk group for stroke prevention.

 —William W. Hung, MD, MPH


References

1. McGinn TG, Guyatt GH, Wyer PC, et al. Users’ guides to the medical literature: XXII: how to use articles about clinical decision rules. Evidence-Based Medicine Working Group. JAMA 2000;284:79–84.

2. January CT, Wann LS, Alpert JS, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2014 AHA/ACC/HRS guideline for the management of patients with atrial fibrillation: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the Heart Rhythm Society. J Am Coll Cardiol 2014;64:e1–e76.

3. Camm AJ, Lip GY, De Caterina R, et al; ESC Committee for Practice Guidelines. 2012 Focused update of the ESC guidelines for the management of atrial fibrillation: an update of the 2010 ESC guidelines for the management of atrial fibrillation. Eur Heart J 2012;33:2719–47.

4. Mitchell LB, Southern DA, Galbraith D, et al; APPROACH Investigators. Prediction of stroke or TIA in patients without atrial fibrillation using CHADS2 and CHA2DS2-VASc scores. Heart 2014;100:1524–30.


Display Headline
CHA2DS2-VASc Score Modestly Predicts Ischemic Stroke, Thromboembolic Events, and Death in Patients with Heart Failure Without Atrial Fibrillation

A Novel Emergency Department Surge Protocol: Implementation of a Targeted Response Plan

Article Type
Changed
Thu, 08/29/2019 - 09:01

From the Ottawa Hospital, Ottawa, ON Canada.

 

Abstract

  • Objective: Fluctuations in emergency department (ED) visits occur frequently, and traditional global measures of ED crowding do not allow for targeted responses to address root causes. We sought to develop, implement, and evaluate a novel ED surge protocol based on the input-throughput-output (ITO) model of ED flow.
  • Methods: This initiative took place at a tertiary care academic teaching hospital. An inter-professional group developed and validated metrics for various levels of surge in relation to the ITO model, measured every 2 hours, which directly linked to specific actions targeting root causes within those components. Main outcome measure was defined as the frequency of sustained (≥ 6 hours) high surges, a marker of inability to respond effectively.
  • Results: During the 6-month study period, average daily hospital occupancy levels rose above 100% (pre 99.5%, post 101.2%; P = 0.01) and frequency of high surges in the output component increased (pre 7.7%, post 10.8%; P = 0.002). Despite this, frequency of sustained high surges remained stable for input (pre 4.5%, post 0.0%; P = 0.13) and throughput (pre 3.5%, post 2.7%; P = 0.54), while improvement in output reached statistical significance (pre 7.7%, post 2.0%, P = 0.01).
  • Conclusions: The ED surge protocol led to effective containment of daily high surges despite significant increase in hospital occupancy levels. This is the first study to describe an ED surge plan capable of identifying within which ITO component surge is happening and linking actions to address specific causes. We believe this protocol can be adapted for any ED.

 

Emergency department (ED) crowding has been defined as “a situation where the demand for emergency services exceeds the ability to provide care in a reasonable amount of time” [1]. Crowding is an increasingly common occurrence in hospital-based EDs, and overcrowding has been shown to adversely affect the delivery of emergency care, resulting in increased patient morbidity and mortality [2,3]. Furthermore, the nature of medical emergencies dictates that rapid daily changes (or surges) in patient volume and acuity occur frequently and unpredictably, contributing to the difficulty of matching resources to demands. Accurate understanding and continuous measurement of where bottlenecks may be occurring within an ED are critical to an effective response to ED surges.

While it is now widely accepted that hospital inpatient overcapacity greatly contributes to crowding in the ED, there are many other factors related to overcrowding that are within the control of the ED. A conceptual model proposed by Asplin partitions ED crowding into 3 interdependent components: input, throughput, and output (Figure 1); this model has recently been accepted as the standard theoretical model for discussing patient flow through the ED by national professional groups such as the Canadian Association of Emergency Physicians [4,5]. Surges can arise from rapid demands in any of these areas, resulting in overall net ED crowding; however, depending on the model component affected, different approaches to solution design may be required. For example, a sudden massive influx of new patients arriving to an ED would cause a surge in the “input” aspect of the model, and response plans should address the issue with actions such as increasing triage capacity, or perhaps calling in additional physician resources in anticipation of looming “throughput” surge. Activating inpatient hospital responses may be premature and ineffective, wasting valuable resources that can be utilized elsewhere. In contrast, ED surges related to “output” factors may be best tackled with hospital-wide responses and resource reallocation.

Many of the widely used measurement tools for overcrowding produce a single overall net value on a one-dimensional scale, failing to capture the complexity of the root causes of surges. For example, the National ED Overcrowding Study (NEDOCS) scoring system, validated at various centers and widely used and studied [5–7], utilizes a number of institutional and situational variables to calculate a final NEDOCS score, which translates to “Not Busy,” “Busy,” “Overcrowded,” “Severely Overcrowded,” or “Dangerously Overcrowded” as a global state. Other published scoring systems, such as the Emergency Department Work Index (EDWIN), while performing well in comparison with the subjective impressions of physicians and nurses, also suffer from computation of a single final score, which makes it difficult to tie to specific actions or solutions [8]. Other surrogate markers quantifying ED crowding have also been used, such as left-without-being-seen rates, ambulance diversions, and total number of boarded patients in the ED; yet these too only measure consequences of crowding and provide little diagnostic information on when and where specific ED surges are actually happening throughout the day [9].

Responding to ED Surges

An effective surge plan should ensure the delivery of safe, effective care in response to various input/throughput/output surges in a coordinated and standardized manner. The ideal ED surge plan should include (1) a prospective continuous tool/method that accurately gauges the surge level (based on objective measures) in various components of the Input-Throughput-Output model of the department, (2) standardized targeted actions that are tied to specific triggers identified within that model to ensure effective solutions, and (3) built-in contingency plans for escalation in the face of sustained/worsening surges. Few studies have been published describing successful implementation of ED surge protocols, with the majority being linked to global ED crowding measures such as the NEDOCS score [10]. As a result, it is difficult to tease out the specific targeted actions that are most effective in dealing with the root causes of a surge.

Local Problem

Prior to the quality improvement initiative we describe below, the Ottawa Hospital ED had no formal process or method for measuring daily surges, nor any standardized action plan to respond effectively to those surges. The state of “busy-ness” was often defined by the gut feelings of frontline workers, which varied considerably depending on the individuals in charge of departmental patient flow. Often, actions to mitigate rising ED surges were triggered too late, resulting in consistent gridlock in the ED that lasted many hours. Several near-misses as well as actual critical incidents had occurred as a result of ineffective management of ED surges, and the authors of this initiative were tasked by senior hospital leadership with designing and implementing a novel solution.

Objectives

We describe our approach to the development, implementation, and evaluation of a novel ED surge protocol at a tertiary care academic hospital based on the principles cited above. Specifically, we sought to:

  • define various levels of ED surge and to provide a common language for better communication between all stakeholders
  • incorporate the validated Input-Throughput-Output model of ED flow to provide a conceptual framework for measuring surges in real-time and developing targeted action plans
  • standardize ED and organizational responses to various ED surges based on identified bottlenecks
  • measure and evaluate the effectiveness of the ED surge plan implementation
  • continuously modify and improve the ED surge protocol using quality improvement strategies

Methods

Setting

The Ottawa Hospital is an academic tertiary care center with 3 campuses (Civic, General, and Riverside), with the ED providing coverage at 2 physical emergency rooms. The hospital is the regional trauma center as well as referral destination for many subspecialties such as cardiac, vascular and neurosurgical emergencies. This 1163-bed facility handles over 160,000 emergency visits a year, over 1 million ambulatory care visits a year, and roughly 35,000 surgical cases annually. The ED is staffed by 78 staff physicians, approximately 250 registered nurses (RNs), and ~50 emergency medicine residents/trainees.

The EDs are supported by a computerized tracking system that provides real-time metrics. This information is displayed by ED-specific geographical area on electronic whiteboards, which can be accessed on overhead monitors, desktop computers, and personal iPads. Information available to ED physicians and staff at any time includes individual-level data such as location, demographics, Canadian Triage Acuity Score (CTAS), and presenting complaint as well as departmental-level data such as patient volumes, wait times, length of stay (LOS), pending/completed diagnostics, consultation status and final dispositions.

According to the policy and standard operating procedures that govern research at the Ottawa Hospital Research Institute, this work met criteria for quality improvement activities exempt from ethics review.

Intervention

A working group comprising a project manager, ED physicians, managers, educators, care facilitators, and inpatient flow managers developed specific criteria defining various levels of surge for each component of the Input-Throughput-Output model (Figure 2). Since there is no universally accepted definition of surge published in the literature, the criteria were derived from a consensus of local expert/leadership opinion as starting points for this project, and refined by polling frontline workers (care facilitators) on their perceptions of what constitutes an ED surge. The ED care facilitator position is held by approximately 10 senior nursing staff who are operational experts in ED flow and management, and carries no assigned bedside nursing duties. Its main mandate is to manage the overall flow of the department, including but not limited to communication with inpatient units and local EMS dispatch, liaising with ED physicians to facilitate efficient use of limited monitored beds and other resources, and reassigning nursing resources around the department as needed.

Over a 4-day period, care facilitators were polled on an hourly basis to determine what factors were important to them in determining how “busy” they perceived the ED to be. These factors included but were not limited to: total number of patients waiting to be seen; time to physician initial assessment; number of monitored beds available; and number of admitted patients boarded in the ED. Analysis was done to prospectively compare their perception of surge levels to the proposed surge plan metrics, and to ensure that the individual criteria for each level were practically meaningful and accurate.

Next, a set of standardized action and response plans was developed and agreed upon, each tied specifically to a corresponding component of the measured ED surge levels (these action plans are detailed in an online Appendix and are also available from the author). The fundamental guiding principle behind the development of each action item was that it should (1) target underlying causes, in a standardized way, specific to the relevant Input-Throughput-Output surge, (2) provide an escalating level of response for each corresponding escalation in the surge level (eg, contacting a staff physician directly for a disposition decision for patients consulted in the ED if the resident trainees have failed to provide one in a timely manner), and (3) coordinate actions by various stakeholders in a planned and organized manner. Practically, the standardized targeted actions span 5 different roles, which were explicitly listed on action sheets for care facilitators, clinical managers, patient flow managers, evening and night coordinators, and clinical directors.

Stakeholder Engagement

Our working group identified 9 internal ED stakeholder groups, 13 internal hospital-wide stakeholder groups, and 4 external stakeholder groups (Table 1). Prior to implementation, multiple stakeholder meetings were held with all of the groups to determine the feasibility of the plan, validate the proposed metrics, and establish concrete actions to be taken by each stakeholder group in response to various surge levels. Examples of specific actions include shifting nursing resources between different areas of the ED, alerting inpatient services of ED surge levels, extra overtime staffing for hospital support staff, escalating discharges on the wards, consideration of ambulance diversion, and calling in extra ED physicians. Buy-in from different hospital stakeholders was further reinforced by senior leadership and management. Once the overall ED surge protocol was approved by relevant stakeholders and senior hospital management, individualized standard worksheets were developed (see Appendix) and training was provided to relevant stakeholders.

Implementation and Continuous Improvement

Given the complexity of the ED- and hospital-wide nature of the surge protocol, implementation was done over multiple phases and Plan-Do-Study-Act (PDSA) improvement cycles:

Phase I (Apr 2013 - Jun 2013)

The initial proposed ED surge level metrics were measured at a single ED campus. Care facilitators were trained and asked to measure surge levels in the ED every 2 hours. This served as a testing period to gauge the sensitivity and reliability of our proposed surge level metrics, and no actual action items were triggered during this period. Stakeholder meetings were held to determine the feasibility of the plan, validate the proposed metrics, and develop “standard work” action plans for each stakeholder group in response to the metrics. This first phase also allowed care facilitators to objectively reflect on ED surge patterns throughout the day, and provided everyone on the ED team with a frequent global snapshot of how “busy” the department was at any time. Finally, surge level data during this phase confirmed previous suspicions that the Output component was the biggest driver behind the overall ED surge level.

Throughout this phase, the ED clinical manager recorded all the usual actions taken in response to the different level of surges as felt appropriate by the individual care facilitator on duty. The variety of actions and types of escalations were collected and fed back to weekly workgroup meetings to help further refine crafting of standardized action plans for implementation of the surge protocol.

Phase II (June - Aug 2013)

An initial trial of a limited ED surge protocol was rolled out at both ED campuses, with actual action items being triggered in response to specific surge level metrics. The main focus of this PDSA cycle was to collect data on how the care facilitator groups at the 2 campuses utilized the surge protocol, as well as feedback on usability, barriers, and effectiveness. Regular audits were performed to track surge level measurement and compliance rates. Educational sessions were provided on the rationale and purpose of the plan so that all team members had a better understanding of ED surges. Frequent meetings with stakeholders to share updates continued throughout Phase II, allowing further engagement as well as fine-tuning of stakeholder action plans based on real-time experiences.

Phase III (Aug 2013 - Dec 2013)

The next phase of implementation expanded beyond the ED and included the hospital’s off-hours and off-service management group. This was, in effect, the official corporate roll-out of the ED surge protocol, with full action plans for all stakeholders, including off-service clinical administrators, inpatient flow managers, and the director of emergency and critical care. Regular audits were performed to ensure compliance with measurement every 2 hours as well as performance of the specified action items for each surge level, with a surge level measurement completion rate of 98%.

Data Collection and Analysis

Over the study period, April 2013 to December 2013 at the Civic campus and June 2013 to December 2013 at the General campus, ED surge levels were measured every 2 hours by the care facilitators and manually recorded in standardized ED surge protocol booklets. These were subsequently entered into an Excel database for tracking and data analysis. Patient volumes and hospital occupancy levels were recorded daily. Perceptions of the primary users of the surge protocol (ie, care facilitators) were obtained via standardized interviews and polls. We present descriptive statistics and statistical process control (SPC) charts. A chi-squared test was performed to compare pre- and post-intervention frequencies of the outcome measures.
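As a sketch of the pre/post comparison, the Pearson chi-squared statistic for a 2x2 table can be computed directly from the cell counts. The counts below are hypothetical, chosen only to illustrate the calculation; the paper reports percentages rather than raw tables:

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction) for a
    2x2 table [[a, b], [c, d]], e.g. rows = pre/post period and
    columns = high-surge readings vs. other readings."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical example: 10 of 100 pre readings vs. 20 of 100 post
# readings were high surge. The statistic is compared against the
# critical value 3.84 for P = 0.05 at 1 degree of freedom.
print(round(chi_squared_2x2(10, 90, 20, 80), 2))  # prints 3.92
```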

Outcome Measures

The main outcome measure was the frequency of sustained (≥ 6 hours) high surges, a marker of inability to respond effectively. High surges were defined as Moderate and Major surges combined. Our expert group’s consensus was that combining the Moderate and Major surge categories into a single “high” surge category was reasonable, since both require mobilizing resources at a hospital-wide level, and failure to improve despite 6 continuous hours of actively addressing such surges carries a significantly higher risk of quality-of-care and patient safety issues.
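With surge levels recorded every 2 hours, a sustained (≥ 6 hours) high surge corresponds to 3 or more consecutive high readings. A minimal sketch of that outcome definition, using illustrative level names and data:

```python
# Sketch: counting sustained high surges from 2-hourly surge readings.
# "Moderate" and "Major" together constitute "high" surge, per the
# definition above; 3 consecutive 2-hour readings span >= 6 hours.
HIGH = {"Moderate", "Major"}

def sustained_high_surges(readings, min_consecutive=3):
    """Count runs of at least min_consecutive consecutive high readings."""
    count = 0
    run = 0
    for level in readings:
        run = run + 1 if level in HIGH else 0
        if run == min_consecutive:  # count each qualifying run exactly once
            count += 1
    return count

# One illustrative day of 2-hourly readings (12 measurements)
day = ["None", "Minor", "Moderate", "Major", "Moderate", "Minor",
       "Moderate", "Moderate", "Major", "None", "Minor", "Minor"]
print(sustained_high_surges(day))  # 2 sustained high surges
```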

Secondary outcomes included the overall frequency of reaching high surge levels in the various components of the Input-Throughput-Output ED flow model, hospital occupancy levels, and care facilitators’ perceptions of workload and the overall effectiveness of the surge protocol.

Results

ED Flow

Table 2 presents the summary statistics for both campuses comparing the pre- and post-implementation time periods. During the study period, the average number of daily ED visits decreased slightly, by 10 patients per day (pre 439.4, post 429.4, P = 0.04), while average daily hospital occupancy levels rose above 100% (pre 99.5%, post 101.2%, P = 0.01). Despite rising hospital occupancy levels, the proportion of time the ED reached high surge levels decreased after implementation for the Input (pre 4.4%, post 2.7%, P = 0.01) and Throughput (pre 20.5%, post 18.1%, P = 0.08) components of ED flow. The frequency of high surges in the Output component, however, increased significantly (pre 7.7%, post 10.8%, P = 0.002).

Statistical Process Control Charts

Figure 3 shows SPC charts for the different Input-Throughput-Output components of the 2 ED campuses over the study period. The daily frequency of sustained high surges lasting 6 or more consecutive hours was plotted along with hospital occupancy levels. The number of times data points rose above the upper limit of the SPC chart (ie, above normal expected variation) pre- and post-intervention was used for statistical comparison. Overall for the 2 campuses combined, the frequency of sustained high surges above normal variation remained stable for the Input (pre 4.5%, post 0.0%, P = 0.13) and Throughput (pre 3.5%, post 2.7%, P = 0.54) components of the ED flow model. More importantly, the frequency of sustained high surges in the Output component decreased significantly (pre 7.7%, post 2.0%, P = 0.01), despite a rise in the total number of times the ED reached severe Output surges and in overall hospital occupancy levels.
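The exceedance counting described above can be sketched with a simple individuals-chart upper control limit (mean + 3 standard deviations of a baseline period); the study's exact SPC construction may differ. The data below are illustrative daily frequencies (%) of sustained high surges:

```python
# Sketch: counting SPC exceedances above the upper control limit (UCL).
# Baseline values set the UCL; post values are then screened against it.
from statistics import mean, pstdev

baseline = [2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 2.5, 3.8, 2.2, 3.0]
ucl = mean(baseline) + 3 * pstdev(baseline)  # mean + 3 sigma

post = [2.0, 1.5, 6.9, 2.2, 1.1, 2.8]
exceedances = sum(1 for x in post if x > ucl)  # points above expected variation
print(exceedances)  # 1 (only the 6.9 reading exceeds the UCL of ~4.96)
```

Comparing the count of such out-of-control points before and after an intervention is one simple way to test whether extreme days became rarer.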

Survey of Care Facilitators

The primary users and drivers of the surge protocol—the care facilitator group—felt strongly that the tool was easy to use and that it made a positive difference. Of respondents, 72% felt that the ED surge protocol had increased their workload, but 92% felt that it was good for the overall flow of the ED. Specific feedback included having a much more standardized language around communicating (and acting on) surges, and a better overall bird’s-eye view of the department.

Discussion

Despite a call for urgent research on implementing solutions targeting daily ED surges (vs. global ED crowding) over a decade ago at the Academic Emergency Medicine 2006 Consensus Conference [12], little work has been published on distinguishing, measuring, and dealing with ED surges. McCarthy et al proposed the rate of patient arrivals to the ED by time of day as a rudimentary definition of surge, although they provided very little specific guidance on how to use that information when responding to spikes in surges [13]. Asplin et al described a number of theoretical models to bridge ED census, daily surges, length of stay, and quality of care; however, these models were never validated in real-life scenarios [14]. A systematic review published in 2009 summarizing articles that described theoretical and practical ED surge responses found large heterogeneity among the proposed models, with little standardization and multiple shortcomings [15].

To our knowledge, this study is the first to report on the actual development, implementation, and evaluation of a daily ED surge protocol that utilizes a widely accepted conceptual model of ED flow. Unlike single global measures of ED crowding, our protocol measures frequent surge levels for the various Input-Throughput-Output components of the ED, which are tied directly to standardized specific actions addressing underlying root causes. Despite a continued rise in hospital occupancy levels and budgetary constraints, we found an improvement in the number of times the ED actually reached severe surge levels, with the exception of Output, which is expected since this component of the flow model is intimately tied to hospital occupancy levels. When severe surges did happen, we were able to deal with them much more effectively and efficiently, resulting in an overall decrease in sustained surges in the ED, including the Output component.

Limitations

Similar to other pragmatic quality improvement projects that rely on manual processes, it was difficult to ensure absolute compliance with surge level measurements throughout the study period. As a result, surge level data were occasionally missing at various times on different days. However, we believe these are relatively nonsignificant occurrences that balanced out over the pre- and post-implementation periods. In addition, we did not have the resources to robustly record and confirm completion of the specific action items activated in response to various surge levels, although we did confirm verbally with frontline workers at regular intervals that those actions were done. Future Plan-Do-Study-Act cycles will focus on explicit measurement of completed action items and further refinement of targeted responses to surge. Finally, while we were only able to collect and present data over a relatively short evaluation period (and are thus potentially susceptible to seasonal variations in ED flow), we believe our data support the surge protocol’s effectiveness when set against the robust trend in hospital occupancy levels.

Future Directions

This ED surge protocol can be adapted and modified to fit any ED. The specific criteria defining Minor/Moderate/Major surges can be set up as ratios or percentages relative to the total number of monitors, beds, etc., available. The principle of linking actions directly to specific triggers within each Input/Throughput/Output category could be translated to fit an organization of any size. A longer evaluation period is currently in progress; based on its results as well as individual feedback, necessary adjustments to our definitions, criteria, and action items will be considered as part of ongoing quality improvement. The principles of our surge protocol are not limited to the ED, and we will explore its implementation in other hospital departments as well as methods to link departments together in alignment with the hospital’s overall corporate strategy for tackling overcrowding.

Conclusion

In summary, implementation of this novel ED surge protocol led to a more effective response to and management of high surges, despite a significant increase in overall hospital occupancy rates and in the associated frequency of surges in the Output component of the ED flow model. Our surge measurement tool is capable of identifying the area of the ED in which surges are occurring, and our ED surge protocol links specific actions to the corresponding root causes. We believe this will lead not only to more accurate assessments of overall ED crowding but also to more timely and effective departmental and institutional responses.

 

Corresponding author: Dr. Edmund S.H. Kwok, Dept. of Emergency Medicine, Ottawa Hospital, Civic Campus, 1053 Carling Ave., Ottawa, ON, Canada K1Y 4E9, [email protected].

Financial disclosures: None.

References

1. Bond K. Interventions to reduce overcrowding in emergency departments. [Technology report no 67.4]. Ottawa: Canadian Agency for Drugs and Technologies in Health; 2006.

2. Richardson DB, et al. Increase in patient mortality at 10 days associated with emergency department overcrowding. Med J Aust 2006;184:213–6.

3. Sprivulis PC, et al. The association between hospital overcrowding and mortality among patients admitted via Western Australian emergency departments. Med J Aust 2006; 184:208–12.

4. Asplin BR, Magid DJ, Rhodes KV, et al. A conceptual model of emergency department crowding. Ann Emerg Med 2003; 42:173–80.

5. Affleck A, Parks P, Drummond A, et al. Emergency department overcrowding and access block. CAEP Position Statement. CJEM 2013;15:359–70.

6. Weiss SJ, Derlet R, Arndahl J, et al. Estimating the degree of emergency department overcrowding in academic medical centers: results of the National ED Overcrowding Study (NEDOCS). Acad Emerg Med 2004;11:38–50.

7. Weiss SJ, Ernst AA, Nick TG. Comparison of the National Emergency Department Overcrowding Scale and the Emergency Department Work Index for quantifying emergency department crowding. Acad Emerg Med 2006;13:513–8.

8. Jones SS, Allen TL, Welch SJ. An independent evaluation of four quantitative emergency department crowding scales. Acad Emerg Med 2006;13:1204–11.

9. Bernstein SL, Verghese V, Leung W, et al. Development and validation of a new index to measure emergency department crowding. Acad Emerg Med 2003;10:938–42.

10. General Accounting Office. Hospital emergency departments–crowded conditions vary among hospitals and communities. GAO-03-460. Washington, DC: US General Accounting Office; 2003.

11. Moseley MG, Dickerson CL, Kasey J, et al. Surge: an organizational response to emergency department overcrowding. J Clin Outcomes Manage 2010;17:453–7.

12. Jenkins JL, O’Connor RE, Cone DC. Differentiating large-scale surge versus daily surge. Acad Emerg Med 2006; 13:1169–72.

13. McCarthy ML, Aronsky D, Kelen GD. The measurement of daily surge and its relevance to disaster preparedness. Acad Emerg Med 2006; 13:1138–41.

14. Asplin BR, Flottemesch TJ, Gordon B. Developing models for patient flow and daily surge capacity research. Acad Emerg Med 2006;13:1109–13.

15. Nager AL, Khanna K. Emergency department surge: models and practical implications. J Trauma 2009; 67(2 Suppl):S96–9.

Journal of Clinical Outcomes Management - NOVEMBER 2015, VOL. 22, NO. 11

From the Ottawa Hospital, Ottawa, ON Canada.

 

Abstract

  • Objective: Fluctuations in emergency department (ED) visits occur frequently, and traditional global measures of ED crowding do not allow for targeted responses to address root causes. We sought to develop, implement, and evaluate a novel ED surge protocol based on the input-throughput-output (ITO) model of ED flow.
  • Methods: This initiative took place at a tertiary care academic teaching hospital. An inter-professional group developed and validated metrics for various levels of surge in relation to the ITO model, measured every 2 hours and directly linked to specific actions targeting root causes within those components. The main outcome measure was the frequency of sustained (≥ 6 hours) high surges, a marker of inability to respond effectively.
  • Results: During the 6-month study period, average daily hospital occupancy levels rose above 100% (pre 99.5%, post 101.2%; P = 0.01) and frequency of high surges in the output component increased (pre 7.7%, post 10.8%; P = 0.002). Despite this, frequency of sustained high surges remained stable for input (pre 4.5%, post 0.0%; P = 0.13) and throughput (pre 3.5%, post 2.7%; P = 0.54), while improvement in output reached statistical significance (pre 7.7%, post 2.0%, P = 0.01).
  • Conclusions: The ED surge protocol led to effective containment of daily high surges despite significant increase in hospital occupancy levels. This is the first study to describe an ED surge plan capable of identifying within which ITO component surge is happening and linking actions to address specific causes. We believe this protocol can be adapted for any ED.

 

Emergency department (ED) crowding has been defined as “a situation where the demand for emergency services exceeds the ability to provide care in a reasonable amount of time” [1]. Crowding is an increasingly common occurrence in hospital-based EDs, and overcrowding of EDs has been shown to adversely affect the delivery of emergency care and results in increased patient morbidity and mortality [2,3]. Furthermore, the nature of medical emergencies dictates that rapid daily changes (or surges) in patient volume and acuity occur frequently and unpredictably, contributing to the difficulty of matching resources to demands. Accurate understanding and continuous measurement of where bottlenecks may be occurring within an ED are critical to an effective response to ED surges.

While it is now widely accepted that hospital inpatient overcapacity greatly contributes to crowding in the ED, there are many other factors related to overcrowding that are within the control of the ED. A conceptual model proposed by Asplin partitions ED crowding into 3 interdependent components: input, throughput, and output (Figure 1); this model has recently been accepted as the standard theoretical model for discussing patient flow through the ED by national professional groups such as the Canadian Association of Emergency Physicians [4,5]. Surges can arise from rapid demands in any of these areas, resulting in overall net ED crowding; however, depending on the model component affected, different approaches to solution design may be required. For example, a sudden massive influx of new patients arriving to an ED would cause a surge in the “input” aspect of the model, and response plans should address the issue with actions such as increasing triage capacity, or perhaps calling in additional physician resources in anticipation of looming “throughput” surge. Activating inpatient hospital responses may be premature and ineffective, wasting valuable resources that can be utilized elsewhere. In contrast, ED surges related to “output” factors may be best tackled with hospital-wide responses and resource reallocation.

Many of the widely used measurement tools for overcrowding produce one final overall net value on a one-dimensional scale, failing to capture the complexity of the root causes of surges. For example, the National ED Overcrowding Study (NEDOCS) scoring system, validated at various centers and widely used and studied [5–7], utilizes a number of institutional and situational variables to calculate a final NEDOCS score, which translates to “Not Busy,” “Busy,” “Overcrowded,” “Severely Overcrowded,” or “Dangerously Overcrowded” as a global state. Other published scoring systems, such as the Emergency Department Work Index (EDWIN), while performing well in comparison to the subjective impressions of physicians and nurses, also suffer from computing a single final score, which is difficult to tie to specific actions or solutions [8]. Other surrogate markers quantifying ED crowding have also been used, such as left-without-being-seen rates, ambulance diversions, and the total number of boarded patients in the ED; yet these too measure only the consequences of crowding and provide little diagnostic information on when and where specific ED surges are actually happening throughout the day [9].

Responding to ED Surges

An effective surge plan should ensure the delivery of safe, effective care in response to various input/throughput/output surges in a coordinated and standardized manner. The ideal ED surge plan should include (1) a prospective continuous tool/method that accurately gauges the surge level (based on objective measures) in various components of the Input-Throughput-Output model of the department, (2) standardized targeted actions that are tied to specific triggers identified within that model to ensure effective solutions, and (3) built-in contingency plans for escalation in the face of sustained/worsening surges. Few studies have been published describing successful implementation of ED surge protocols, with the majority being linked to global ED crowding measures such as the NEDOCS score [10]. As a result, it is difficult to tease out the specific targeted actions that are most effective in dealing with the root causes of a surge.

Local Problem

Prior to the quality improvement initiative we describe below, the Ottawa Hospital ED had no formal process or method of measuring daily surges, nor any standardized action plan to respond effectively to those surges. The state of “busy-ness” was often defined by the gut feelings of frontline workers, which varied considerably depending on the individuals in charge of departmental patient flow. Often, actions to try to mitigate rising ED surges were triggered too late, resulting in consistent gridlock in the ED that lasted many hours. Several near-misses as well as actual critical incidents had occurred as a result of ineffective management of ED surges, and the authors of this initiative were tasked by senior hospital leadership with designing and implementing a novel solution.

Objectives

We describe our approach to the development, implementation, and evaluation of a novel ED surge protocol at a tertiary care academic hospital based on the principles cited above. Specifically, we sought to:

  • define various levels of ED surge and to provide a common language for better communication between all stakeholders
  • incorporate the validated Input-Throughput-Output model of ED flow to provide a conceptual framework for measuring surges in real-time and developing targeted action plans
  • standardize ED and organizational responses to various ED surges based on identified bottlenecks
  • measure and evaluate the effectiveness of the ED surge plan implementation
  • continuously modify and improve the ED surge protocol using quality improvement strategies

Methods

Setting

The Ottawa Hospital is an academic tertiary care center with 3 campuses (Civic, General, and Riverside), with the ED providing coverage at 2 physical emergency rooms. The hospital is the regional trauma center as well as referral destination for many subspecialties such as cardiac, vascular and neurosurgical emergencies. This 1163-bed facility handles over 160,000 emergency visits a year, over 1 million ambulatory care visits a year, and roughly 35,000 surgical cases annually. The ED is staffed by 78 staff physicians, approximately 250 registered nurses (RNs), and ~50 emergency medicine residents/trainees.

The EDs are supported by a computerized tracking system that provides real-time metrics. This information is displayed by ED-specific geographical area on electronic whiteboards, which can be accessed on overhead monitors, desktop computers, and personal iPads. Information available to ED physicians and staff at any time includes individual-level data such as location, demographics, Canadian Triage Acuity Score (CTAS), and presenting complaint as well as departmental-level data such as patient volumes, wait times, length of stay (LOS), pending/completed diagnostics, consultation status and final dispositions.

According to the policy and standard operating procedures that govern research at the Ottawa Hospital Research Institute, this work met criteria for quality improvement activities exempt from ethics review.

Intervention

A working group comprising a project manager, ED physicians, managers, educators, care facilitators, and inpatient flow managers developed specific criteria defining various levels of surge for each component of the Input-Throughput-Output model (Figure 2). Since there is no universally accepted definition of surge published in the literature, the criteria were derived from consensus of local expert/leadership opinion as starting points for this project, and refined by polling frontline workers (care facilitators) on their perceptions of what constitutes an ED surge. The ED care facilitator position is held by approximately 10 senior nursing staff who are operational experts in ED flow and management; the role has no assigned bedside nursing duties. Its main mandate is to manage the overall flow of the department, including but not limited to communication with inpatient units and local EMS dispatch, liaising with ED physicians to facilitate efficient use of limited monitored beds and other resources, and reassigning nursing resources around the department as needed.

Over a 4-day period, care facilitators were polled on an hourly basis to determine what factors were important to them in determining how “busy” they perceived the ED to be. These factors included, but were not limited to: total number of patients waiting to be seen; time to physician initial assessment; number of monitored beds available; and number of admitted patients boarded in the ED. Analysis was done to prospectively compare their perception of surge levels to the proposed Surge Plan metrics, and to ensure that the individual criteria for each level were practically meaningful and accurate.

Next, a set of standardized action and response plans were developed and agreed upon, each tied specifically to a corresponding component of the different measured ED surge levels (these action plans are detailed in an online Appendix and are also available from the author). The fundamental guiding principle behind the development of each action item was that it should (1) target underlying causes - in a standardized way - specific to the relevant Input-Throughput-Output surge, (2) provide an escalating level of effectiveness for each corresponding escalation in surge level (eg, contacting a staff physician directly for a disposition decision for patients consulted on in the ED, if the resident trainees have failed to provide one in a timely manner), and (3) coordinate actions by various stakeholders in a planned and organized manner. Practically, the standardized targeted actions span 5 different roles, explicitly listed on action sheets for care facilitators, clinical managers, patient flow managers, evening and night coordinators, and clinical directors.
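The trigger-to-action structure described above can be represented as a simple lookup from (component, surge level) to a standardized action list. A minimal sketch; the component and level names follow the protocol, but the actions shown are hypothetical examples, not the hospital's actual standard work:

```python
# Sketch: mapping (ITO component, surge level) pairs to standardized
# actions. Actions are illustrative placeholders only.
SURGE_ACTIONS = {
    ("input", "Moderate"):  ["open extra triage position",
                             "notify ED physician group"],
    ("input", "Major"):     ["call in additional physician coverage"],
    ("output", "Moderate"): ["alert inpatient flow managers"],
    ("output", "Major"):    ["escalate ward discharges",
                             "engage evening/night coordinator"],
}

def actions_for(component, level):
    """Return the standardized actions for a component at a surge level,
    or an empty list if no escalation is required."""
    return SURGE_ACTIONS.get((component, level), [])

print(actions_for("output", "Major"))
```

Keeping the mapping as data rather than ad hoc decisions is what makes the response standardized: each 2-hourly measurement deterministically selects the same escalation for the same bottleneck.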

Stakeholder Engagement

Our working group identified 9 internal ED stakeholder groups, 13 internal hospital-wide stakeholder groups, and 4 external stakeholder groups (Table 1). Prior to implementation, multiple stakeholder meetings were held with all of the groups to determine the feasibility of the plan, validate the proposed metrics, and establish concrete actions to be taken by each stakeholder group in response to various surge levels. Examples of specific actions include shifting nursing resources between different areas of the ED, alerting inpatient services of ED surge levels, extra overtime staffing for hospital support staff, escalating discharges on the wards, consideration of ambulance diversion, and calling in extra ED physicians. Buy-in from different hospital stakeholders was further reinforced by senior leadership and management. Once the overall ED surge protocol was approved by relevant stakeholders and senior hospital management, individualized standard worksheets were developed (see Appendix) and training was provided to relevant stakeholders.

Implementation and Continuous Improvement

Given the complexity of the ED- and hospital-wide nature of the surge protocol, implementation was done over multiple phases and Plan-Do-Study-Act (PDSA) improvement cycles:

Phase I (Apr 2013 - Jun 2013)

The initially proposed ED surge level metrics were measured at a single ED campus. Care facilitators were trained and asked to measure surge levels in the ED every 2 hours. This served as a testing period to gauge the sensitivity and reliability of our proposed surge level metrics, and no actual action items were triggered during this period. Stakeholder meetings were held to determine feasibility of the plan, validate the proposed metrics, and develop “standard work” action plans for each stakeholder group in response to the metrics. This first phase also allowed care facilitators to objectively reflect on ED surge patterns throughout the day, and provided everyone on the ED team with a frequent global snapshot of how “busy” the department was at any time. Finally, surge level data during this phase confirmed previous suspicions that the Output component was the biggest driver of overall ED surge level.

Throughout this phase, the ED clinical manager recorded all the usual actions taken in response to the different levels of surge, as felt appropriate by the individual care facilitator on duty. The variety of actions and types of escalation were collected and fed back to weekly workgroup meetings to help further refine the standardized action plans for implementation of the surge protocol.


Conclusion

In summary, implementation of this novel ED surge protocol led to a more effective response and management of high surges, despite significant increase in overall hospital occupancy rates and associated frequency of surges in the Output component of the ED flow model. Our surge measurement tool is capable of identifying within which area of the ED  surges are occurring, and our  ED surge protocol links specific actions to address those specific root causes. We believe this will lead not only to more accurate assessments of overall ED crowding but also to more timely and effective departmental and institutional responses.

 

Corresponding author: Dr. Edmund S.H. Kwok, Dept. of Emergency Medicine, Ottawa Hospital, Civic Campus, 1053 Carling Ave., Ottawa, ON, Canada K1Y 4E9, [email protected].

Financial disclosures: None.

From the Ottawa Hospital, Ottawa, ON Canada.

 

Abstract

  • Objective: Fluctuations in emergency department (ED) visits occur frequently, and traditional global measures of ED crowding do not allow for targeted responses to address root causes. We sought to develop, implement, and evaluate a novel ED surge protocol based on the input-throughput-output (ITO) model of ED flow.
  • Methods: This initiative took place at a tertiary care academic teaching hospital. An inter-professional group developed and validated metrics for various levels of surge in relation to the ITO model, measured every 2 hours, which directly linked to specific actions targeting root causes within those components. Main outcome measure was defined as the frequency of sustained (≥ 6 hours) high surges, a marker of inability to respond effectively.
  • Results: During the 6-month study period, average daily hospital occupancy levels rose above 100% (pre 99.5%, post 101.2%; P = 0.01) and frequency of high surges in the output component increased (pre 7.7%, post 10.8%; P = 0.002). Despite this, frequency of sustained high surges remained stable for input (pre 4.5%, post 0.0%; P = 0.13) and throughput (pre 3.5%, post 2.7%; P = 0.54), while improvement in output reached statistical significance (pre 7.7%, post 2.0%, P = 0.01).
  • Conclusions: The ED surge protocol led to effective containment of daily high surges despite significant increase in hospital occupancy levels. This is the first study to describe an ED surge plan capable of identifying within which ITO component surge is happening and linking actions to address specific causes. We believe this protocol can be adapted for any ED.

 

Emergency department (ED) crowding has been defined as “a situation where the demand for emergency services exceeds the ability to provide care in a reasonable amount of time” [1]. Crowding is an increasingly common occurrence in hospital-based EDs, and overcrowding of EDs has been shown to adversely affect the delivery of emergency care and results in increased patient morbidity and mortality [2,3]. Furthermore, the nature of medical emergencies dictates that rapid daily changes (or surges) in patient volume and acuity occur frequently and unpredictably, contributing to the difficulty of matching resources to demands. Accurate understanding and continuous measurement of where bottlenecks may be occurring within an ED are critical to an effective response to ED surges.

While it is now widely accepted that hospital inpatient overcapacity greatly contributes to crowding in the ED, there are many other factors related to overcrowding that are within the control of the ED. A conceptual model proposed by Asplin partitions ED crowding into 3 interdependent components: input, throughput, and output (Figure 1); this model has recently been accepted as the standard theoretical model for discussing patient flow through the ED by national professional groups such as the Canadian Association of Emergency Physicians [4,5]. Surges can arise from rapid demands in any of these areas, resulting in overall net ED crowding; however, depending on the model component affected, different approaches to solution design may be required. For example, a sudden massive influx of new patients arriving to an ED would cause a surge in the “input” aspect of the model, and response plans should address the issue with actions such as increasing triage capacity, or perhaps calling in additional physician resources in anticipation of looming “throughput” surge. Activating inpatient hospital responses may be premature and ineffective, wasting valuable resources that can be utilized elsewhere. In contrast, ED surges related to “output” factors may be best tackled with hospital-wide responses and resource reallocation.

Many of the widely used measurement tools for overcrowding produce one final overall net value on a one-dimensional scale, failing to capture the complexity of the root causes of surges. For example, the National ED Overcrowding Study (NEDOCS) scoring system, validated at various centers and widely used and studied [5–7], utilizes a number of institutional and situational variables to calculate a final NEDOCS score, which translates to “Not Busy,” “Busy,” “Overcrowded,” “Severely Overcrowded,” or “Dangerously Overcrowded” as a global state. Other published scoring systems, such as the Emergency Department Work Index (EDWIN), perform well against the subjective impressions of physicians and nurses but likewise compute a single final score, which is difficult to tie to specific actions or solutions [8]. Other surrogate markers quantifying ED crowding have also been used, such as left-without-being-seen rates, ambulance diversions, and total number of boarded patients in the ED; yet they too only measure consequences of crowding and provide little diagnostic information on when and where specific ED surges are actually happening throughout the day [9].

Responding to ED Surges

An effective surge plan should ensure the delivery of safe, effective care in response to various input/throughput/output surges in a coordinated and standardized manner. The ideal ED surge plan should include (1) a prospective continuous tool/method that accurately gauges the surge level (based on objective measures) in various components of the Input-Throughput-Output model of the department, (2) standardized targeted actions that are tied to specific triggers identified within that model to ensure effective solutions, and (3) built-in contingency plans for escalation in the face of sustained/worsening surges. Few studies have been published describing successful implementation of ED surge protocols, with the majority being linked to global ED crowding measures such as the NEDOCS score [10]. As a result, it is difficult to tease out the specific targeted actions that are most effective in dealing with the root causes of a surge.
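The second design principle (standardized actions tied to specific triggers) can be captured as a simple lookup from a (component, surge level) pair to an action list. A minimal sketch: the component names follow the Input-Throughput-Output model described above, but the levels and actions shown here are illustrative placeholders, not the hospital's actual protocol.

```python
# Hypothetical mapping from (flow component, surge level) to the
# standardized actions that should be triggered. Real protocols would
# enumerate all component/level combinations and name real resources.
SURGE_ACTIONS = {
    ("input", "moderate"): ["add a triage nurse", "alert the on-call ED physician"],
    ("input", "major"): ["open overflow triage area", "call in additional ED physician"],
    ("throughput", "major"): ["contact staff physicians directly for pending disposition decisions"],
    ("output", "major"): ["escalate ward discharges", "notify the inpatient flow manager"],
}

def actions_for(component, level):
    """Return the standardized actions tied to a component's surge level.

    Unknown combinations return an empty list, i.e. no triggered actions.
    """
    return SURGE_ACTIONS.get((component.lower(), level.lower()), [])
```

This structure keeps the trigger-to-action mapping explicit and auditable, which is the point of the protocol: every measured surge level resolves deterministically to the same response.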

Local Problem

Prior to the quality improvement initiative we describe below, the Ottawa Hospital ED had no formal process or method of measuring daily surges, nor any standardized action plan to respond effectively to those surges. The state of “busy-ness” was often defined by the gut feelings of frontline workers, which varied considerably depending on the individuals in charge of departmental patient flow. Often, actions to mitigate rising ED surges were triggered too late, resulting in consistent gridlock in the ED that lasted many hours. Several near-misses as well as actual critical incidents had occurred as a result of ineffective management of ED surges, and the authors of this initiative were tasked by senior hospital leadership with designing and implementing a novel solution.

Objectives

We describe our approach to the development, implementation, and evaluation of a novel ED surge protocol at a tertiary care academic hospital based on the principles cited above. Specifically, we sought to:

  • define various levels of ED surge and to provide a common language for better communication between all stakeholders
  • incorporate the validated Input-Throughput-Output model of ED flow to provide a conceptual framework for measuring surges in real-time and developing targeted action plans
  • standardize ED and organizational responses to various ED surges based on identified bottlenecks
  • measure and evaluate the effectiveness of the ED surge plan implementation
  • continuously modify and improve the ED surge protocol using quality improvement strategies

Methods

Setting

The Ottawa Hospital is an academic tertiary care center with 3 campuses (Civic, General, and Riverside), with the ED providing coverage at 2 physical emergency rooms. The hospital is the regional trauma center as well as referral destination for many subspecialties such as cardiac, vascular and neurosurgical emergencies. This 1163-bed facility handles over 160,000 emergency visits a year, over 1 million ambulatory care visits a year, and roughly 35,000 surgical cases annually. The ED is staffed by 78 staff physicians, approximately 250 registered nurses (RNs), and ~50 emergency medicine residents/trainees.

The EDs are supported by a computerized tracking system that provides real-time metrics. This information is displayed by ED-specific geographical area on electronic whiteboards, which can be accessed on overhead monitors, desktop computers, and personal iPads. Information available to ED physicians and staff at any time includes individual-level data such as location, demographics, Canadian Triage Acuity Score (CTAS), and presenting complaint as well as departmental-level data such as patient volumes, wait times, length of stay (LOS), pending/completed diagnostics, consultation status and final dispositions.

According to the policy and standard operating procedures that govern research at the Ottawa Hospital Research Institute, this work met criteria for quality improvement activities exempt from ethics review.

Intervention

A working group comprising a project manager, ED physicians, managers, educators, care facilitators, and inpatient flow managers developed specific criteria defining various levels of surge for each component of the Input-Throughput-Output model (Figure 2). Since there is no universally accepted definition of surge published in the literature, the criteria were derived from consensus of local expert/leadership opinion as starting points for this project, and refined by polling frontline workers (care facilitators) on their perceptions of what constitutes an ED surge. The ED care facilitator position is held by approximately 10 senior nursing staff who are operational experts in ED flow and management and have no assigned bedside nursing duties. The role's main mandate is to manage overall flow of the department, including but not limited to communicating with inpatient units and local EMS dispatch, liaising with ED physicians to facilitate efficient use of limited monitored beds and other resources, and reassigning nursing resources around the department as needed.

Over a 4-day period, care facilitators were polled on an hourly basis to determine what factors were important to them in determining how “busy” they perceived the ED to be. These factors included but were not limited to: total number of patients waiting to be seen; time to physician initial assessment; number of monitored beds available; and number of admitted patients boarded in the ED. Analysis was done to prospectively compare their perception of surge levels to the proposed Surge Plan metrics, and to ensure that the individual criteria for each level were practically meaningful and accurate.

Next, a set of standardized action and response plans was developed and agreed upon, each tied specifically to a corresponding component of the different measured ED surge levels (these action plans are detailed in an online Appendix and are also available from the author). The fundamental guiding principle behind the development of each action item was that it should (1) target underlying causes - in a standardized way - specific to the relevant Input-Throughput-Output surge, (2) provide an escalating level of effectiveness for each corresponding escalation in the surge level (eg, contacting a staff physician directly for a disposition decision for patients consulted in the ED, if the resident trainees have failed to do so in a timely manner), and (3) coordinate actions by various stakeholders in a planned and organized manner. Practically, the standardized targeted actions span 5 different roles, which were explicitly listed on action sheets for care facilitators, clinical managers, patient flow managers, evening and night coordinators, and clinical directors.

Stakeholder Engagement

Our working group identified 9 internal ED stakeholder groups, 13 internal hospital-wide stakeholder groups, and 4 external stakeholder groups (Table 1). Prior to implementation, multiple stakeholder meetings were held with all of the groups to determine the feasibility of the plan, validate the proposed metrics, and establish concrete actions to be taken by each stakeholder group in response to various surge levels. Examples of specific actions include shifting nursing resources between different areas of the ED, alerting inpatient services of ED surge levels, extra overtime staffing for hospital support staff, escalating discharges on the wards, consideration of ambulance diversion, and calling in extra ED physicians. Buy-in from different hospital stakeholders was further reinforced by senior leadership and management. Once the overall ED surge protocol was approved by relevant stakeholders and senior hospital management, individualized standard worksheets were developed (see Appendix) and training provided to relevant stakeholders.

Implementation and Continuous Improvement

Given the complexity of the ED- and hospital-wide nature of the surge protocol, implementation was done over multiple phases and Plan-Do-Study-Act (PDSA) improvement cycles:

Phase I (Apr 2013 - Jun 2013)

The initial proposed ED surge level metrics were measured at a single ED campus. Care facilitators were trained and asked to measure surge levels in the ED every 2 hours. This served as a testing period to gauge the sensitivity and reliability of our proposed surge level metrics, and no actual action items were triggered during this period. Stakeholder meetings were held to determine feasibility of the plan, validate the proposed metrics, and develop “standard work” action plans for each stakeholder group in response to the metrics. This first phase also allowed care facilitators to objectively reflect on ED surge patterns throughout the day, and provided everyone on the ED team with a frequent global snapshot of how “busy” the department was at any time. Finally, surge level data during this phase confirmed previous suspicions that the Output component was the biggest driver behind overall ED surge level.

Throughout this phase, the ED clinical manager recorded all the usual actions taken in response to the different levels of surge as felt appropriate by the individual care facilitator on duty. The variety of actions and types of escalations was collected and fed back to weekly workgroup meetings to help further refine the standardized action plans for implementation of the surge protocol.

Phase II (June - Aug 2013)

An initial trial of a limited ED surge protocol was rolled out at both ED campuses, with actual action items being triggered in response to specific surge level metrics. The main focus of this PDSA cycle was to collect data on how the care facilitator groups at the 2 campuses utilized the surge protocol, as well as feedback on usability, barriers, and effectiveness. Regular audits were performed to monitor surge measurement compliance rates. Educational sessions were provided regarding the rationale and purpose of the plan so that all team members had a better understanding of ED surges. Frequent meetings with stakeholders to share updates continued throughout Phase II, allowing further engagement as well as fine-tuning of stakeholder action plans based on real-time experiences.

Phase III (Aug 2013 - Dec 2013)

The next phase of implementation expanded beyond the ED and included the hospital’s off-hours and off-service management group. This was in effect the official corporate roll-out of the ED surge protocol, with full action plans for all stakeholders, including off-service clinical administrators, inpatient flow managers, and the director of emergency and critical care. Regular audits were performed to ensure compliance with measurement every 2 hours as well as performance of the specified action items tied to each surge level, achieving a surge level measurement completion rate of 98%.

Data Collection and Analysis

Over the study period April 2013 to December 2013 at the Civic campus and June 2013 to December 2013 at the General campus, ED surge levels were measured every 2 hours by the care facilitators and manually recorded in standardized ED surge protocol booklets. These were subsequently entered into an Excel database for tracking and data analysis. Patient volumes and hospital occupancy levels were recorded daily. Perceptions of the primary users of the surge protocol (ie, care facilitators) were obtained via standardized interviews and polls. We present descriptive statistics and statistical process control (SPC) charts. A chi-squared test was performed for comparison of pre- and post-intervention frequencies of outcome measures.
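A pre/post comparison of this kind reduces to a Pearson chi-squared test on a 2×2 table. A minimal stdlib-only sketch, using hypothetical counts (the article reports percentages, not the underlying denominators):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table.

    Rows are pre/post period; columns are high surge vs. no high surge:
    a, b = pre-period high / not-high counts; c, d = post-period counts.
    Uses the shortcut formula n * (ad - bc)^2 / (row and column products).
    """
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    if den == 0:
        raise ValueError("degenerate table: a row or column total is zero")
    return n * (a * d - b * c) ** 2 / den

# Hypothetical counts consistent with the reported Output proportions
# (pre 7.7%, post 2.0%): 77/1000 vs. 20/1000 observation intervals.
stat = chi2_2x2(77, 923, 20, 980)
significant = stat > 3.841  # chi-squared critical value, df = 1, alpha = 0.05
```

With these illustrative counts the statistic far exceeds the critical value, matching the article's conclusion that the Output improvement was significant at P = 0.01.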

Outcome Measures

The main outcome measure was the frequency of sustained (≥ 6 hours) high surges, a marker of inability to respond effectively. High surges were defined as Moderate and Major surges combined. Our expert group consensus was that combining the Moderate and Major surge categories to represent “high” surge was reasonable, since both require mobilizing resources at a hospital-wide level, and failure to improve despite 6 continuous hours of actively addressing such high surges would lead to significantly higher risk of quality-of-care and patient safety issues.
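With readings taken every 2 hours, a sustained (≥ 6-hour) high surge corresponds to at least 3 consecutive Moderate or Major readings. A minimal sketch of how such runs could be counted from a day's log (the function name and level labels are illustrative, not from the actual protocol booklets):

```python
def sustained_high_surges(readings, min_hours=6, interval_hours=2):
    """Count sustained high-surge episodes in a sequence of surge readings.

    A reading of "Moderate" or "Major" counts as high. An episode is
    counted once when a run of high readings reaches min_hours
    (i.e. min_hours / interval_hours consecutive readings).
    """
    min_run = min_hours // interval_hours  # e.g. 3 consecutive readings
    episodes = 0
    run = 0
    for level in readings:
        if level in ("Moderate", "Major"):
            run += 1
            if run == min_run:  # count each episode exactly once
                episodes += 1
        else:
            run = 0
    return episodes
```

For example, a day logged as Minor, Moderate, Major, Moderate, Minor, Major, Major contains one sustained episode: the middle run of three high readings spans 6 hours, while the trailing run of two does not.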

Secondary outcomes include overall frequency of reaching high surge levels at various components of the Input-Throughput-Output ED flow model, hospital occupancy levels, and care facilitators’ perceptions on workload and overall effectiveness of the surge protocol.

Results

ED Flow

Table 2 presents the summary statistics for both campuses comparing the pre- and post-implementation time periods. During the study period, the average number of daily ED visits decreased slightly by 10 patients per day (pre 439.4, post 429.4, P = 0.04), while the average daily hospital occupancy levels steadily rose above 100% (pre 99.5%, post 101.2%, P = 0.01). Despite rising hospital occupancy levels, the proportion of time the ED reached high surge levels decreased for Input (pre 4.4%, post 2.7%, P = 0.01) and Throughput (pre 20.5%, post 18.1%, P = 0.08) components of ED flow after implementation. The frequency of high surges in the Output component did significantly increase (pre 7.7%, post 10.8%, P = 0.002).

Statistical Process Control Charts

Figure 3 shows SPC charts for the different Input-Throughput-Output components of the 2 ED campuses over the study period. Daily frequencies of sustained high surges lasting 6 or more consecutive hours were plotted along with hospital occupancy levels. The number of times data points rose above the upper limit of the SPC chart (ie, above normal expected variation) pre- and post-intervention was used for statistical comparison. Overall for the 2 campuses combined, the frequency of sustained high surges above normal variation remained stable for the Input (pre 4.5%, post 0.0%; P = 0.13) and Throughput (pre 3.5%, post 2.7%; P = 0.54) components of the ED flow model. More importantly, the frequency of sustained high surges in the Output component decreased, reaching statistical significance (pre 7.7%, post 2.0%; P = 0.01), despite a rise in the total number of times the ED reached severe Output surges and in overall hospital occupancy levels.
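The article does not specify which SPC chart type was used; assuming a p-chart (proportions per subgroup), the upper control limit beyond which a point exceeds normal expected variation is conventionally the mean proportion plus three standard errors:

```python
import math

def p_chart_upper_limit(p_bar, n):
    """3-sigma upper control limit for a p-chart.

    p_bar is the mean proportion across subgroups; n is the subgroup size.
    Points above this limit exceed normal (common-cause) variation and are
    flagged as special-cause signals.
    """
    if not 0 <= p_bar <= 1 or n <= 0:
        raise ValueError("p_bar must be in [0, 1] and n must be positive")
    return p_bar + 3 * math.sqrt(p_bar * (1 - p_bar) / n)

# e.g. a mean high-surge proportion of 0.10 over subgroups of 100 readings
ucl = p_chart_upper_limit(0.10, 100)  # 0.10 + 3 * 0.03 = 0.19
```

Counting the pre- vs. post-intervention points that land above this limit yields exactly the comparison reported in the paragraph above.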

Survey of Care Facilitators

The primary users and drivers of the surge protocol—the care facilitator group—felt strongly that the tool was easy to use and that it made a positive difference. Seventy-two percent felt that the ED surge protocol had increased their workload, but 92% felt that it was good for the overall flow of the ED. Specific feedback included having a much more standardized language around communicating (and acting on) surges, and a better overall bird’s-eye view of the department.

Discussion

Despite a call for urgent research on implementing solutions targeting daily ED surges (vs. global ED crowding) over a decade ago at the Academic Emergency Medicine 2006 Consensus Conference [12], little work has been published on distinguishing, measuring, and dealing with ED surges. McCarthy et al proposed the rate of patient arrivals to the ED by time of day as a rudimentary definition of surge, although they provided very little specific guidance on what to do with that information in the setting of responding to spikes in surges [13]. Asplin et al described a number of theoretical models to bridge ED census, daily surges, length of stay and quality of care, however they were never validated in real-life scenarios [14]. A systematic review published in 2009 summarizing articles that described theoretical and practical ED surge responses found a large heterogeneity of different proposed models with little standardization and multiple shortcomings [15].

To our knowledge, this study is the first to report on the actual development, implementation, and evaluation of a daily ED surge protocol that utilizes a widely accepted conceptual model of ED flow. Unlike single global measures of ED crowding, our protocol measures frequent surge levels for the various Input-Throughput-Output components of the ED, which are tied directly to standardized specific actions addressing underlying root causes. Despite a continued rise in hospital occupancy levels and budgetary constraints, we found an improvement in the number of times the ED actually hit severe surges, with the exception of Output, which is expected since this component of the flow model is intimately tied to hospital occupancy levels. When severe surges did happen, we were able to deal with them much more effectively and efficiently, resulting in an overall decrease in sustained surges in the ED, including the Output component.

Limitations

Similar to other pragmatic quality improvement projects that rely on manual processes, it was difficult to ensure absolute compliance with surge level measurements throughout the study period. As a result, surge level data were occasionally missing at various times on different days. However, we believe these were relatively minor occurrences that balanced out over the pre- and post-implementation periods. In addition, we did not have the resources to robustly record and confirm completion of the specific action items that were activated in response to various surge levels, although we did regularly confirm verbally with frontline workers that those actions were done. Future Plan-Do-Study-Act cycles will focus on explicit measurement of actual completed action items and further refinement of targeted responses to surge. Finally, while we were only able to collect and present data over a relatively short evaluation period (and thus potentially susceptible to seasonal variations in ED flow), we believe that our data support the surge protocol’s effectiveness when set against the robust trend in hospital occupancy levels.

Future Directions

This ED surge protocol can be adapted and modified to fit any ED. The specific criteria defining Minor/Moderate/Major surges can be set up as ratios or percentages relative to the total number of monitors, beds, and other resources available. The principle of linking actions directly to specific triggers within each Input/Throughput/Output category can be translated to fit an organization of any size. A longer evaluation period is currently in progress; based on its results as well as individual feedback, necessary adjustments to our definitions, criteria, and action items will be considered as part of ongoing quality improvement. The principles of our surge protocol are not limited to the ED, and we will explore its implementation in other hospital departments, as well as methods to link them together in alignment with the hospital’s overall corporate strategy for tackling overcrowding.

Conclusion

In summary, implementation of this novel ED surge protocol led to more effective response to and management of high surges, despite a significant increase in overall hospital occupancy rates and in the associated frequency of surges in the Output component of the ED flow model. Our surge measurement tool is capable of identifying within which area of the ED surges are occurring, and our ED surge protocol links specific actions to address those specific root causes. We believe this will lead not only to more accurate assessments of overall ED crowding but also to more timely and effective departmental and institutional responses.

 

Corresponding author: Dr. Edmund S.H. Kwok, Dept. of Emergency Medicine, Ottawa Hospital, Civic Campus, 1053 Carling Ave., Ottawa, ON, Canada K1Y 4E9, [email protected].

Financial disclosures: None.

References

1. Bond K. Interventions to reduce overcrowding in emergency departments. [Technology report no 67.4]. Ottawa: Canadian Agency for Drugs and Technologies in Health; 2006.

2. Richardson DB, et al. Increase in patient mortality at 10 days associated with emergency department overcrowding. Med J Aust 2006;184:213–6.

3. Sprivulis PC, et al. The association between hospital overcrowding and mortality among patients admitted via Western Australian emergency departments. Med J Aust 2006; 184:208–12.

4. Asplin BR, Magid DJ, Rhodes KV, et al. A conceptual model of emergency department crowding. Ann Emerg Med 2003; 42:173–80.

5. Affleck A, Parks P, Drummond A, et al. Emergency department overcrowding and access block. CAEP Position Statement. CJEM 2013;15:359–70.

6. Weiss SJ, Derlet R, Arndahl J, et al. Estimating the degree of emergency department overcrowding in academic medical centers: results of the National ED Overcrowding Study (NEDOCS). Acad Emerg Med 2004;11:38–50.

7. Weiss SJ, Ernst AA, Nick TG. Comparison of the National Emergency Department Overcrowding Scale and the Emergency Department Work Index for quantifying emergency department crowding. Acad Emerg Med 2006;13:513–8.

8. Jones SS, Allen TL, Welch SJ. An independent evaluation of four quantitative emergency department crowding scales. Acad Emerg Med 2006;13:1204–11

9. Bernstein SL, Verghese V, Leung W, et al. Development and validation of a new index to measure emergency department crowding. Acad Emerg Med 2003;10:938–42

10. General Accounting Office. Hospital emergency departments–crowded conditions vary among hospitals and communities. GAO-03-460. Washington, DC: US General Accounting Office; 2003.

11. Moseley MG, Dickerson CL, Kasey J, et al. Surge: an organizational response to emergency department overcrowding. J Clin Outcomes Manage 2010;17:453–7.

12. Jenkins JL, O’Connor RE, Cone DC. Differentiating large-scale surge versus daily surge. Acad Emerg Med 2006; 13:1169–72.

13. McCarthy ML, Aronsky D, Kelen GD. The measurement of daily surge and its relevance to disaster preparedness. Acad Emerg Med 2006; 13:1138–41.

14. Asplin BR, Flottemesch TJ, Gordon B. Developing models for patient flow and daily surge capacity research. Acad Emerg Med 2006;13:1109–13.

15. Nager AL, Khanna K. Emergency department surge: models and practical implications. J Trauma 2009; 67(2 Suppl):S96–9.



Do Women With Knee Osteoarthritis Experience Greater Pain Sensitivity Than Men?

Article Type
Changed
Thu, 09/19/2019 - 13:30
Display Headline
Do Women With Knee Osteoarthritis Experience Greater Pain Sensitivity Than Men?

Among patients with osteoarthritis of the knee, women experienced greater sensitivity to various pain modalities, such as lower tolerance to heat, cold, and pressure, and greater widespread pain than men, according to a study published online ahead of print October 5 in Arthritis Care & Research.

“Many questions still remain as to why women with knee osteoarthritis are more sensitive to painful stimuli than are men. While therapeutic approaches to control pain are only beginning to take these sex differences into account, there is still quite a bit of research yet to be done to help reduce this gender gap and improve clinical therapies for men and women alike,” said lead author Emily J. Bartley, PhD, a Research Assistant Professor at the University of Florida College of Dentistry in Gainesville.


For this study, 288 participants between the ages of 45 and 85 completed a battery of quantitative sensory pain procedures assessing sensitivity to contact heat, cold pressor, mechanical pressure, and punctate stimuli. Differences in temporal summation were examined, along with measures of clinical pain and functional performance.

Compared with men, women exhibited greater sensitivity to multiple pain modalities (eg, lower heat, cold, and pressure thresholds/tolerances and greater temporal summation of pain). There were no sex differences in clinical pain, with the exception of greater widespread pain observed in women. Although there were select age-related differences in pain sensitivity, sex differences in pain varied minimally across age cohorts.

“Overall, these findings provide evidence for greater overall sensitivity to experimental pain in women with symptomatic knee osteoarthritis compared with men, suggesting that enhanced central sensitivity may be an important contributor to pain in this group,” wrote Dr. Bartley and colleagues.

References

Suggested Reading
Bartley EJ, King CD, Sibille KT, et al. Enhanced pain sensitivity among individuals with symptomatic knee osteoarthritis: potential sex differences in central sensitization. Arthritis Care Res (Hoboken). 2015 Oct 5. [Epub ahead of print].


Improved Appointment Reminders at a VA Ophthalmology Clinic

Article Type
Changed
Tue, 03/06/2018 - 10:57
Display Headline
Improved Appointment Reminders at a VA Ophthalmology Clinic

From the Gavin Herbert Eye Institute, Department of Ophthalmology, University of California-Irvine, Irvine, CA.

Abstract

  • Objective: To describe a change in mail notification approach at a Veterans Affairs hospital and its impact on appointments.
  • Methods: A new notification system was implemented in which the information on the notice was limited to the date, time, and location of the appointment. Previously, the notice contained the appointment time in addition to methods of rescheduling, patient account information, and a variety of VA policies presented in a disorganized manner. We assessed whether there was a reduction in the number of patients who had to be redirected by clinical staff at the ophthalmology clinic at the Long Beach Veterans Affairs Hospital.
  • Results: The mean number of patients who visited the clinic mistakenly during the 14 days prior to the new notification system was 14.93 (SD = 6.05) compared with 9.4 (SD = 3.45; P = 0.005) during the 15 days after.
  • Conclusion: A simple abbreviated notification has the potential to improve patient understanding and can increase clinical efficiency, ultimately reducing health costs.

 

Across the United States, patients at Veterans Affairs (VA) hospitals are routinely informed of their clinic appointments via both telephone and conventional mail notifications. Prior studies have confirmed that appointment reminders reduce no-show rates, thus increasing clinical efficiency and decreasing health care costs [1–10]. At the same time, avoiding occasions where patients who do not have an appointment arrive erroneously also enhances efficiency, allowing clinic staff to focus their attention on patients assigned to their clinic.

Recently a new notification system was implemented at our medical center. It involves a folded mailer delivered by the US Postal Service that provides patients with only the essential information necessary for timely arrival at the correct location for their appointments. The telephone notification system that works in tandem with the mailer has been retained. In this report, we describe the change and the results seen in our clinic.

Methods

Setting

The Veterans Affairs Long Beach Healthcare system maintains a large teaching medical campus in Long Beach, CA. The medical center and its community clinics employ more than 2200 full-time employees and provide care for more than 50,000 veterans. There are 37 outpatient clinics located on the main campus.

Our study was conducted in the outpatient ophthalmology clinic, located on the main campus. The clinic is open 8 am to 4 pm Monday through Friday, sees on average 30 to 40 patients per day, and has a front desk staffed with 2 secretaries. When a patient arrives at the clinic, their information (name, social security number, date/time of appointment, etc.) is looked up in a national database. If the patient has arrived at the incorrect location, additional time is spent redirecting them.

Notification System

In the prior notification scheme, patients were mailed in an envelope a notice printed on 8.5" x 11" paper that included their appointment time and also listed methods of rescheduling, their account information, and a variety of VA policies presented in a disorganized manner (Figure 1). The new notice is a 4" x 5" folded card stock mailer that provides only the time, date, and location of the appointment in the message section, with a map and driving directions on the back of the card (Figure 2).

Assessment

We assessed the effectiveness of this new notification system by monitoring the number of patients who arrived at our clinic by mistake. The 2 clinic secretaries recorded daily on a piece of paper the number of patients who arrived in clinic requesting the services of another office. In addition, the clinic secretaries were asked at the end of each day to estimate the average time required to redirect patients to their correct destination per encounter, including looking up patient information in the VA national database, redirecting them with verbal and pictorial instructions, asking technicians to assist patients with redirection, calling other clinics to ensure timely arrival, etc.

Results

During the 14 days prior to the change in notification system, the mean number of patients per day who visited the clinic mistakenly was 14.93 (SD = 6.05). During the 15 days after the new notification system was implemented, the mean was 9.4 (SD = 3.45; P = 0.005).
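The article does not name the statistical test behind P = 0.005; a pooled two-sample t test on the reported summary statistics is one plausible choice and is consistent with it (a minimal sketch, test choice assumed):

```python
import math

# Summary statistics reported in the Results (test choice is an assumption)
m1, s1, n1 = 14.93, 6.05, 14   # mean, SD, number of days before the change
m2, s2, n2 = 9.4, 3.45, 15     # mean, SD, number of days after the change

# Pooled variance and standard error of the difference in daily means
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))

t = (m1 - m2) / se   # t statistic for the difference in means
df = n1 + n2 - 2     # degrees of freedom

print(round(t, 2), df)   # 3.05 27; a t table gives a two-sided P near 0.005
```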

The mean number of minutes required to redirect patients was estimated to be 2.28 minutes prior to the change and 2.53 minutes after (P = 0.507), which equates to an average of 40.64 minutes per day in the initial study period and 22.93 minutes per day in the second study period (P = 0.05). Assuming a secretary makes on average $22/hour, the Long Beach VA spent an average of $14.90 and $8.41 per day, respectively, redirecting mistaken patients to their correct clinic before and after the new notice system was implemented.
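The dollar figures follow directly from the reported minutes per day and the $22/hour wage assumption; a quick arithmetic check:

```python
wage_per_minute = 22 / 60   # $22/hour secretary wage, per the text

# Minutes per day spent redirecting patients, before and after the change
cost_before = 40.64 * wage_per_minute
cost_after = 22.93 * wage_per_minute

print(f"${cost_before:.2f} ${cost_after:.2f}")   # $14.90 $8.41 per day
```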

Discussion

The Veterans Health Administration is America’s largest integrated health care system, and is one of the few health systems—and by far the largest—that is virtually paperless [11]. Their medical records are nearly wholly electronic. The VA’s use of electronic health records has significantly enhanced the quality of patient care and improved productivity [12]. Their ongoing mission includes identifying and evaluating strategies that lead to high-quality and cost-effective care for veterans.

Effective appointment notifications have the potential to increase productivity and thus save money. Developing tailored methods of informing patients of the time and location of their clinic appointments improves the accuracy with which patients arrive at large medical campuses, which translates into more efficient clinic flow. The simpler, abbreviated notification implemented at our VA appears to improve patient understanding of the time and location of their clinic appointment, based on the decreased number of patients arriving in error at the ophthalmology clinic. It is unclear which specific aspect of the new mailed notification is responsible for our results (addition of the map, new layout, down-scaling of the notification, etc.). Limitations of our study include reliance on the secretaries’ subjective reporting of time spent redirecting patients and the restriction of data collection to a single clinic. In addition, patients arriving at appointments may have received the old notice. Further investigations are underway to study appointment notification improvements across various hospitals and clinics.

In summary, efficiency is a key component in allowing the Veterans Affairs system to provide quality care for the patients under its purview. We show that an abbreviated notification system was associated with a reduction in the need to redirect patients arriving by mistake at our office. Assuming that this abbreviated notification system would benefit the 37 outpatient clinics at the VA in Long Beach, CA as it did the ophthalmology clinic, it has the potential to save approximately $240/day and generate a yearly savings of $87,689.
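The campus-wide projection is straightforward multiplication of the per-clinic daily savings; a sketch (365-day year assumed):

```python
per_clinic_savings = 14.90 - 8.41   # dollars/day saved at one clinic (Results)
clinics = 37                        # outpatient clinics on the main campus

daily = per_clinic_savings * clinics   # campus-wide savings per day
yearly = daily * 365                   # assuming a 365-day year

print(f"${daily:.0f}/day, ${yearly:.0f}/year")   # $240/day, $87647/year
```

Rounded inputs give roughly $87,600/year; the text’s $87,689 presumably carries unrounded per-day costs through the calculation.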

 

Corresponding author: Bradley Jacobsen, Irvine School of Medicine, University of California, 1001 Health Sciences Road, 252 Irvine Hall Irvine, CA 92697 [email protected].

References

1. Bigby J, Giblin J, Pappius EM, Goldman L. Appointment reminders to reduce no-show rates. A stratified analysis of their cost-effectiveness. JAMA 1983;250:1742–5.

2. Frankel BS, Hoevell MF. Health service appointment keeping: A behavioral view and critical review. Behav Modification 1978;2:435–64.

3. Friman PC, Finney JW, Rapoff MA, Christophersen ER. Improving pediatric appointment keeping with reminders and reduced response requirement. J Appl Behav Anal 1985;18:315–21.

4. Gariti P, Alterman AI, Holub-Beyer E, et al. Effects of an appointment reminder call on patient show rates. J Subst Abuse Treat 1995;12:207–12.

5. Grover S, Gagnon G, Flegel KM, Hoey JR. Improving appointment-keeping by patients new to a hospital medical clinic with telephone or mailed reminders. Can Med Assoc J 1983;129:1101–3.

6. Levy R, Claravall V. Differential effects of a phone reminder on appointment keeping for patients with long and short between-visit intervals. Med Care 1977;15:435–8.

7. Morse DL, Coulter MP, Nazarian LF, Napodano RJ. Waning effectiveness of mailed reminders on reducing broken appointments. Pediatrics 1981;68:846–9.

8. Schroeder SA. Lowering broken appointment rates at a medical clinic. Med Care 1973;11:75–8.

9. Shepard DS, Moseley TA 3rd. Mailed versus telephoned appointment reminders to reduce broken appointments in a hospital outpatient department. Med Care 1976;14:268–73.

10. Turner AJ, Vernon JC. Prompts to increase attendance in a community mental-health center. J Appl Behav Anal 1976;9:141–5.

11. Brown SH, Lincoln MJ, Groen PJ, Kolodner RM. VistA--US Department of Veterans Affairs national-scale HIS. Int J Med Inform 2003;69:135–56.

12. Evans DC, Nichol WP, Perlin JB. Effect of the implementation of an enterprise-wide electronic health record on productivity in the Veterans Health Administration. Health Econ Policy Law 2006;1:163–9.

Issue
Journal of Clinical Outcomes Management - NOVEMBER 2015, VOL. 22, NO. 11

From the Gavin Herbert Eye Institute, Department of Ophthalmology, University of California-Irvine, Irvine, CA.

Abstract

  • Objective: To describe a change in mail notification approach at a Veterans Affairs hospital and its impact on appointments.
  • Methods: A new notification system was implemented in which the information on the notice was limited to the date, time, and location of the appointment. Previously, the notice contained information about patients’ appointment time in addition to listing methods of rescheduling, patient account information, and a variety of VA policies presented in a disorganized manner. We assessed whether there was a reduction in number of patients who had to be redirected by clinical staff at the ophthalmology clinic at the Long Beach Veterans Affairs Hospital.
  • Results: The mean number patients who visited the clinic mistakenly during the 14 days prior to the new notification system was 14.93 (SD = 6.05) compared with 9.4 (SD = 3.45; = 0.005) during the 15 days after.
  • Conclusion: A simple abbreviated notification has the potential to improve patient understanding and can increase clinical efficiency, ultimately reducing health costs.

 

Across the United States, patients at Veterans Affairs (VA) hospitals are routinely informed of their clinic appointments via both telephone and conventional mail notifications. Prior studies have confirmed that appointment reminders reduce no-show rates, thus increasing clinical efficiency and decreasing health care costs [1–10]. At the same time, avoiding occasions where patients who do not have an appointment arrive erroneously also enhances efficiency, allowing clinic staff to focus their attention on patients assigned to their clinic.

Recently a new notification system was implemented at our medical center. This new notification system involves a folded mailer delivered by the US Postal Service that provides patients only with the essential information necessary for timely arrival at the correct location to their appointments. The telephone notification system that works in tandem with this has been retained. In this report, we describe the change and the results seen in our clinic.

Methods

Setting

The Veterans Affairs Long Beach Healthcare system maintains a large teaching medical campus in Long Beach, CA. The medical center and its community clinics employ more than 2200 full-time employees and provide care for more than 50,000 veterans. There are 37 outpatient clinics located on the main campus.

Our study was conducted in the outpatient ophthalmology clinic, located on the main campus. The clinic is open 8 am to 4 pm Monday through Friday and sees on average 30 to 40 patients per day with a front desk staffed with 2 secretaries. When a patient arrives to the clinic, their information (name, social security number, date/time of appointment, etc.) is looked up in a national database. If the patient arrives at the incorrect location, time is additionally spent redirecting patients.

Notification System

In the prior notification scheme, patients were mailed in an envelope a notice printed on 8.5" x 11" paper that included their appointment time and also listed methods of rescheduling, their account information, and a variety of VA policies presented in a disorganized manner (Figure 1). The new notice is a 4" x 5" folded card stock mailer that provides only the time, date and location of appointment in the message section, containing a map and driving directions on the back of the card (Figure 2).

Assessment

We assessed the effectiveness of this new notification system by monitoring the number of patients who arrived at our clinic by mistake. The 2 clinic secretaries recorded daily on a piece of paper the number of patients who arrived in clinic 

requesting the services of another office. In addition, the clinic secretaries were asked at the end of each day to estimate the average time required to redirect patients to their correct destination per encounter, including collecting patient information in the VA national database, redirecting them with verbal and pictorial instructions, asking technicians to assist patients with redirection, calling other clinics to ensure timely arrival, etc.

Results

During the 14 days prior to the change in notification system, the mean number patients per day who visited the clinic mistakenly was 14.93 (SD = 6.05). After the implementation of the new notification system, the mean number was 9.4 (SD = 3.45; = 0.005) for the 15-day period after the change was implemented.

The mean number of minutes required to redirect patients was estimated to be 2.28 minutes prior to the change and 2.53 after (P = 0.507), which equates to an average of 40.64 minutes per day in the initial study period and 22.93 minutes per day in the second study period (P = 0.05). Assuming a secretary makes on average $22/hour, the Long Beach VA spent an average of $14.90 and $8.41 per day, respectively, redirecting mistaken patients to their correct clinic before and after the new notice system was implemented.

Discussion

The Veterans Health Administration is America’s largest integrated health care system, and is one of the few health systems—and by far the largest—that is virtually paperless [11]. Their medical records are nearly wholly electronic. The VA’s use of electronic health records has significantly enhanced the quality of patient care and improved productivity [12]. Their ongoing mission includes identifying and evaluating strategies that lead to high-quality and cost-effective care for veterans.

Effective appointment notifications have the potential to increase productivity and thus save money. Developing tailored methods of informing patients of the time and location of their clinic appointment improves the accuracy with which patients arrive at large medical campuses, which translates into a more efficient clinic flow. The simpler, abbreviated notification implemented at our VA appears to improve patient understanding of time/location of their clinic appointment, based on the decreased number of patients arriving in error to the ophthalmology clinic. It is unclear which specific aspect of this new mailed notification system is responsible for our results (addition of map, new layout, down-scaling of notification etc). Limitations to our study included reliance on the secretaries’ subjective reporting of  time spent redirecting and limiting our data collection to one clinic. In addition, patients arriving at appointments may have received the old notice. Further investigations are underway to study appointment notification improvements across various hospitals and clinics.

In summary, efficiency is a key component in allowing the Veterans Affairs to provide quality care for the patients under its purview. We show that an abbreviated notification system was associated with a reduction in the need to redirect patients arriving by mistake to our office. Assuming that this abbreviated notification system would benefit the 37 outpatient clinics at the VA in Long Beach, CA as it did with the ophthalmology clinic, this new notification system has the potential to save $240 dollars/day and generate a yearly savings of $87,689.

 

Corresponding author: Bradley Jacobsen, Irvine School of Medicine, University of California, 1001 Health Sciences Road, 252 Irvine Hall Irvine, CA 92697 [email protected].

From the Gavin Herbert Eye Institute, Department of Ophthalmology, University of California-Irvine, Irvine, CA.

Abstract

  • Objective: To describe a change in mail notification approach at a Veterans Affairs hospital and its impact on appointments.
  • Methods: A new notification system was implemented in which the information on the notice was limited to the date, time, and location of the appointment. Previously, the notice contained information about patients’ appointment time in addition to listing methods of rescheduling, patient account information, and a variety of VA policies presented in a disorganized manner. We assessed whether there was a reduction in number of patients who had to be redirected by clinical staff at the ophthalmology clinic at the Long Beach Veterans Affairs Hospital.
  • Results: The mean number patients who visited the clinic mistakenly during the 14 days prior to the new notification system was 14.93 (SD = 6.05) compared with 9.4 (SD = 3.45; = 0.005) during the 15 days after.
  • Conclusion: A simple abbreviated notification has the potential to improve patient understanding and can increase clinical efficiency, ultimately reducing health costs.

 

Across the United States, patients at Veterans Affairs (VA) hospitals are routinely informed of their clinic appointments via both telephone and conventional mail notifications. Prior studies have confirmed that appointment reminders reduce no-show rates, thus increasing clinical efficiency and decreasing health care costs [1–10]. At the same time, avoiding occasions where patients who do not have an appointment arrive erroneously also enhances efficiency, allowing clinic staff to focus their attention on patients assigned to their clinic.

Recently a new notification system was implemented at our medical center. This new notification system involves a folded mailer delivered by the US Postal Service that provides patients only with the essential information necessary for timely arrival at the correct location to their appointments. The telephone notification system that works in tandem with this has been retained. In this report, we describe the change and the results seen in our clinic.

Methods

Setting

The Veterans Affairs Long Beach Healthcare system maintains a large teaching medical campus in Long Beach, CA. The medical center and its community clinics employ more than 2200 full-time employees and provide care for more than 50,000 veterans. There are 37 outpatient clinics located on the main campus.

Our study was conducted in the outpatient ophthalmology clinic, located on the main campus. The clinic is open 8 am to 4 pm Monday through Friday and sees on average 30 to 40 patients per day with a front desk staffed with 2 secretaries. When a patient arrives to the clinic, their information (name, social security number, date/time of appointment, etc.) is looked up in a national database. If the patient arrives at the incorrect location, time is additionally spent redirecting patients.

Notification System

In the prior notification scheme, patients were mailed in an envelope a notice printed on 8.5" x 11" paper that included their appointment time and also listed methods of rescheduling, their account information, and a variety of VA policies presented in a disorganized manner (Figure 1). The new notice is a 4" x 5" folded card stock mailer that provides only the time, date and location of appointment in the message section, containing a map and driving directions on the back of the card (Figure 2).

Assessment

We assessed the effectiveness of this new notification system by monitoring the number of patients who arrived at our clinic by mistake. The 2 clinic secretaries recorded daily on a piece of paper the number of patients who arrived in clinic 

requesting the services of another office. In addition, the clinic secretaries were asked at the end of each day to estimate the average time required to redirect patients to their correct destination per encounter, including collecting patient information in the VA national database, redirecting them with verbal and pictorial instructions, asking technicians to assist patients with redirection, calling other clinics to ensure timely arrival, etc.

Results

During the 14 days prior to the change in notification system, the mean number patients per day who visited the clinic mistakenly was 14.93 (SD = 6.05). After the implementation of the new notification system, the mean number was 9.4 (SD = 3.45; = 0.005) for the 15-day period after the change was implemented.

The mean number of minutes required to redirect patients was estimated to be 2.28 minutes prior to the change and 2.53 after (P = 0.507), which equates to an average of 40.64 minutes per day in the initial study period and 22.93 minutes per day in the second study period (P = 0.05). Assuming a secretary makes on average $22/hour, the Long Beach VA spent an average of $14.90 and $8.41 per day, respectively, redirecting mistaken patients to their correct clinic before and after the new notice system was implemented.

Discussion

The Veterans Health Administration is America’s largest integrated health care system, and is one of the few health systems—and by far the largest—that is virtually paperless [11]. Their medical records are nearly wholly electronic. The VA’s use of electronic health records has significantly enhanced the quality of patient care and improved productivity [12]. Their ongoing mission includes identifying and evaluating strategies that lead to high-quality and cost-effective care for veterans.

Effective appointment notifications have the potential to increase productivity and thus save money. Developing tailored methods of informing patients of the time and location of their clinic appointment improves the accuracy with which patients arrive at large medical campuses, which translates into a more efficient clinic flow. The simpler, abbreviated notification implemented at our VA appears to improve patient understanding of time/location of their clinic appointment, based on the decreased number of patients arriving in error to the ophthalmology clinic. It is unclear which specific aspect of this new mailed notification system is responsible for our results (addition of map, new layout, down-scaling of notification etc). Limitations to our study included reliance on the secretaries’ subjective reporting of  time spent redirecting and limiting our data collection to one clinic. In addition, patients arriving at appointments may have received the old notice. Further investigations are underway to study appointment notification improvements across various hospitals and clinics.

In summary, efficiency is a key component in allowing the Veterans Affairs to provide quality care for the patients under its purview. We show that an abbreviated notification system was associated with a reduction in the need to redirect patients arriving by mistake to our office. Assuming that this abbreviated notification system would benefit the 37 outpatient clinics at the VA in Long Beach, CA as it did with the ophthalmology clinic, this new notification system has the potential to save $240 dollars/day and generate a yearly savings of $87,689.

 

Corresponding author: Bradley Jacobsen, Irvine School of Medicine, University of California, 1001 Health Sciences Road, 252 Irvine Hall Irvine, CA 92697 [email protected].

References

1. Bigby J, Giblin J, Pappius EM, Goldman L. Appointment reminders to reduce no-show rates. A stratified analysis of their cost-effectiveness. JAMA 1983;250:1742–5.

2. Frankel BS, Hovell MF. Health service appointment keeping: A behavioral view and critical review. Behav Modif 1978;2:435–64.

3. Friman PC, Finney JW, Rapoff MA, Christophersen ER. Improving pediatric appointment keeping with reminders and reduced response requirement. J Appl Behav Anal 1985;18:315–21.

4. Gariti P, Alterman AI, Holub-Beyer E, et al. Effects of an appointment reminder call on patient show rates. J Subst Abuse Treat 1995;12:207–12.

5. Grover S, Gagnon G, Flegel KM, Hoey JR. Improving appointment-keeping by patients new to a hospital medical clinic with telephone or mailed reminders. Can Med Assoc J 1983;129:1101–3.

6. Levy R, Claravall V. Differential effects of a phone reminder on appointment keeping for patients with long and short between-visit intervals. Med Care 1977;15:435–8.

7. Morse DL, Coulter MP, Nazarian LF, Napodano RJ. Waning effectiveness of mailed reminders on reducing broken appointments. Pediatrics 1981;68:846–9.

8. Schroeder SA. Lowering broken appointment rates at a medical clinic. Med Care 1973;11:75–8.

9. Shepard DS, Moseley TA 3rd. Mailed versus telephoned appointment reminders to reduce broken appointments in a hospital outpatient department. Med Care 1976;14:268–73.

10. Turner AJ, Vernon JC. Prompts to increase attendance in a community mental-health center. J Appl Behav Anal 1976;9:141–5.

11. Brown SH, Lincoln MJ, Groen PJ, Kolodner RM. VistA--US department of Veterans Affairs national-scale HIS. Int J Med Inform 2003;69:135–56.

12. Evans DC, Nichol WP, Perlin JB. Effect of the implementation of an enterprise-wide electronic health record on productivity in the Veterans Health Administration. Health Econ Policy Law 2006;1:163–9.


Issue
Journal of Clinical Outcomes Management - NOVEMBER 2015, VOL. 22, NO. 11
Display Headline
Improved Appointment Reminders at a VA Ophthalmology Clinic

AAP: Return-to-play protocols for teen athletes often neglected

Article Type
Changed
Fri, 01/18/2019 - 15:21
Display Headline
AAP: Return-to-play protocols for teen athletes often neglected

WASHINGTON – Half of parents and two in five coaches would not follow required return-to-play rules after a child suffers a hard head hit in organized sports, a recent study suggests.

“The findings underscore the need for educating both coaches and parents on consequences leading to concussion,” concluded Edward J. Hass, Ph.D., director of research and outcomes at the Nemours Center for Children’s Health Media, and his associates in their abstract presented at the annual meeting of the American Academy of Pediatrics.


Return-to-play protocols refer to the series of steps that should be followed after a child’s head injury before the child participates in the sport again: pulling the child out of the practice or game and waiting for a doctor’s medical clearance before any return. Intermediate steps include ensuring the child can do aerobic activity, then begin strengthening activity, then start practice, and then finally enter a game situation, Dr. Hass explained in an interview.

“The implications of this work are not for the purposes of preventing a primary injury,” Dr. Hass said. “Increasing knowledge of symptoms and of what can result from concussion is not going to prevent the initial injury, but it can certainly prevent further damage to the young brain by having a child going back in before they’re healed from their concussive symptoms.”

Dr. Hass’s team conducted an online survey of 506 U.S. visitors to the KidsHealth.org website owned by Nemours, between Jan. 13, 2015, and Feb. 11, 2015. Respondents included 331 noncoach parents of children aged 18 years and under, 86 coach-parents, and 89 coaches without children – “people who were visiting our website and presumably involved in or interested in children’s health,” Dr. Hass said during his abstract presentation.

In the survey, 50% of noncoach parents and 56% of coaches reported they would follow the steps of the return-to-play protocol, pulling the child out of play and not allowing a return until medical approval. The remaining respondents would either allow the player to return if the player wanted to, have the player sit for 15 minutes and return when he or she felt okay, or have the player sit out only the rest of the game or practice.

“These findings would suggest that 20% of the time on the field of play, you have a child who doesn’t have an advocate for brain safety,” Dr. Hass said during his presentation. The abstract notes that symptoms requiring emergency treatment “would not receive such urgency 25% to 50% of the time.”

The survey also asked about what respondents would do regarding each of several different symptoms following a head hit, using a 5-point scale for each symptom: no special care; let child rest at home; take the child to the doctor in a day or 2; call the doctor right away; or take the child to emergency care right away. Symptoms ranged from concussion symptoms, such as blurry vision, headache, walking unsteadily, vomiting, difficulty concentrating, and loss of consciousness, to unrelated concerns, such as sudden hunger or body aches.

Analysis of these answers, together with the question of whether the respondent would allow a child to sleep following a head hit, revealed two distinct groups, the researchers found.

“There’s clearly two different kinds of mentalities going on, the more cautious ‘take no chances’ group and the less cautious ‘watchful-waiting group,’ ” Dr. Hass said. Both groups are equally good at symptom discrimination, such as walking unsteadily or hearing a player say they have blurred vision or a headache, he said. But the watchful-waiters, 25% of the respondents and predominantly male, are less likely to follow return-to-play protocols.

“It’s lack of awareness of what the symptoms mean,” Dr. Hass said. “If the child is experiencing blurred vision, that could be a sign of concussion, and that’s a brain injury and something that requires medical attention.”

The study was funded by Dr. Hass’s employer, Nemours Center for Children’s Health Media. Dr. Hass reported no other disclosures.



Article Source

AT THE AAP NATIONAL CONFERENCE

Vitals

Key clinical point: Only about half of coaches and parents in a convenience sample would follow return-to-play protocol after head hits in adolescent sports.

Major finding: 56% of coaches and 50% of noncoach parents would follow return-to-play protocols after a teen player’s head hit.

Data source: The findings are based on an online survey of 506 U.S. parents and coaches conducted between Jan. 13, 2015, and Feb. 11, 2015.

Disclosures: The study was funded by Dr. Hass’s employer, Nemours Center for Children’s Health Media. Dr. Hass reported no other disclosures.

AAOS Guidelines Sum-Up Prevention and Treatment Strategies for ACL Injuries

Article Type
Changed
Thu, 09/19/2019 - 13:30
Display Headline
AAOS Guidelines Sum-Up Prevention and Treatment Strategies for ACL Injuries

The American Academy of Orthopaedic Surgeons (AAOS) Board of Directors has approved Appropriate Use Criteria (AUCs) for anterior cruciate ligament (ACL) injury prevention programs and treatment, as well as rehabilitation and function checklists to help guide and ensure a safe return to sports for the treated athlete. The AUCs and checklists are available online at http://www.orthoguidelines.org/go/auc/.

“Both prevention and treatment of ACL injuries can be confusing given the diversity of injured patients—from skeletally immature youth to older adults, low- and high-risk athletes playing a variety of sports, and patients with and without arthritis,” said Robert Quinn, MD, AUC Section Leader on the Committee on Evidence-Based Quality and Value.


Last year, the AAOS released the Clinical Practice Guideline (CPG) titled “Management of Anterior Cruciate Ligament Injuries.” The guideline recommends, with “moderate” supporting evidence, that reconstructive surgery occur within 5 months of an ACL injury to protect the knee joint. In addition, the CPG states that in young adults, ages 18 to 35, use of a patient's own tissue is preferable to donor tissue to repair an ACL tear.

The new “Appropriate Use Guideline for the Treatment of Anterior Cruciate Ligament Injuries" provides more specific guidance to orthopedic surgeons based on a patient’s various indications, including age, activity level, presence of advanced arthritis, and the status of the ACL tear. The guideline recommends specific next steps and procedures to ensure optimal recovery. Each treatment recommendation is ranked by level of appropriateness.

“The good news for patients and practitioners is that ACL reconstruction with autograft or allograft tissue is very successful,” said Dr. Quinn. “What these guidelines do is delineate, in a very easy-to-maneuver way, what the most appropriate treatments are in each category. It actually gives you the specific circumstances to plug in, and highlights where the evidence matches the recommendations.”

Most patients, especially high-level athletes, are eager to return to play following ACL surgery. However, there is a significant amount of post-surgical rehabilitation and functional recovery required before an athlete can resume sports play.

The new ACL Reconstruction Surgery “Return to Play” and “Postoperative Rehabilitation” checklists “are evidence-based lists on what should be going on before an athlete returns to play, and are constructed in a way that realistically sets expectations for what needs to be accomplished,” said Dr. Quinn.

The “Postoperative Rehabilitation” checklist outlines the post-surgical protocol, from early range of motion, weight bearing and closed and open chain quad and hamstring therapy, to optional rehabilitative bracing and neuromuscular stimulation.

According to the “Return to Play” checklist, a patient should feel confident that he or she can return to the sport of interest and should have been advised to participate in an ongoing ACL-prevention/movement-retraining program before resuming activities. In addition, the graft and surgical site should have fully healed, and range of motion, balance, knee stability, strength, and functional skills should have been restored.

For athletes involved in competitive or recreational athletics with no prior history of ACL reconstruction and no current history of ACL deficiency, the “Appropriate Use Guideline for ACL Injury Prevention Programs” provides advice regarding a supervised ACL injury prevention program, utilizing the best available scientific evidence and expert opinion.

“Injury prevention programs are very successful,” said Dr. Quinn. “This AUC helps alleviate some of the controversy about when these good options are most applicable.”

Like the treatment AUC, the injury prevention guidelines use patient indications and classifications. For example, sex, growth status, activity level, sports participation, and athlete risk can help determine whether a particular supervised ACL injury prevention program is optimal.



Attitudes of Physicians in Training Regarding Reporting of Patient Safety Events

Article Type
Changed
Thu, 03/28/2019 - 15:18
Display Headline
Attitudes of Physicians in Training Regarding Reporting of Patient Safety Events

From the Department of Internal Medicine, Advocate Lutheran General Hospital, Park Ridge, IL.

 

Abstract

  • Objective: To understand the attitudes and experiences of physicians in training with regard to patient safety event reporting.
  • Methods: Residents and fellows in the department of internal medicine were surveyed using a questionnaire containing 5 closed-ended items. These items examined trainees’ attitudes, experiences and knowledge about safety event reporting and barriers to their reporting.
  • Results: Of 80 eligible trainees, 61% responded. The majority of residents understood that it is their responsibility to report safety events. Identified barriers to reporting were the complexity of the reporting system, lack of feedback after reporting (and thus no knowledge of resulting system changes), and the low priority of reporting in the clinical workflow.
  • Conclusion: An inpatient safety and quality committee intends to develop solutions to the challenges trainees face in reporting patient safety events.

 

Nationwide, graduate medical education programs are changing to include a greater focus on quality improvement and patient safety [1,2]. This has been recognized as a priority by the Institute of Medicine, the Joint Commission, and the Accreditation Council for Graduate Medical Education (ACGME) [3–6]. Hospital safety event reporting systems have been implemented to improve patient safety. Despite national expectations and demonstrated benefits of reporting adverse events, most resident and attending physicians fail to understand the value, lack the skills to report, and do not participate in incident reporting [7–9].

Past attempts to increase awareness about patient safety reporting have resulted in minimal participation [10,11]. In relation to other health care providers, attending and resident physicians have the lowest rate of patient safety reporting [12]. Interventions aiming to improve reporting have had mixed results, with sustained improvement being a major challenge [13,14]. To advance our efforts to improve reporting of patient safety events as a means toward improving patient safety, we sought to understand the attitudes and beliefs of our physicians in training with regard to patient safety event reporting.

 

Methods

Setting

Our institution, a community teaching hospital located in Park Ridge, IL, began patient safety event reporting in 2006 by remote data entry using the Midas information management system (MidasPlus, Xerox, Tucson, AZ). In 2012, as part of the ACGME systems-based practice competency, we asked residents to enter at least 1 patient safety event for each rotation block. The events could be entered with identifying information or anonymously.

Quality Improvement Project

Given the national focus on patient safety and quality improvement, as well as our organizational goal of zero patient safety events by 2020, in 2014 we formed an inpatient safety and quality committee. This committee includes the medical director of patient safety, the internal medicine program director, the associate program director, the chief resident, fellows, residents, and attending physicians. The committee was formed with the long-term objectives of advancing patient safety and quality improvement efforts and decreasing preventable errors. As physicians in training are key frontline personnel, the committee decided to focus its initial short-term efforts on this group.

Questionnaire

To understand the magnitude and context of resident reporting behavior, we surveyed the residents and fellows in the department of internal medicine. The fellowships were in cardiology, gastroenterology, and hematology/oncology. The questionnaire we used contained 5 closed-ended items that examined trainees' attitudes, experiences, and knowledge about incident reporting. The survey was distributed to the residents and fellows via SurveyMonkey during August 2014.

Results

A total of 80 eligible residents and fellows received the survey, and 49 completed it (61% response rate). Almost three-fourths of respondents indicated that they knew whose responsibility it is to report safety events. Over half indicated that they forget to make a report when the ward is busy. Over two-thirds indicated that they felt it was useful to discuss safety events. We anticipated that trainees would have concerns about disciplinary action and blame. We found that 28.5% worried about disciplinary action, but 40.8% did not. Asked whether they agree or disagree with the statement “Junior staff are often unfairly blamed for adverse incidents,” over half neither agreed nor disagreed; almost one-third disagreed, and only 16.3% agreed. Complete data on the responses are shown in Table 1.

When asked to outline reasons for not reporting incidents, participating residents and fellows most commonly identified the complexity of the reporting system, lack of feedback, lack of updates about system changes resulting from safety event reports, the low priority of reporting in the clinical workflow, and fear of losing anonymity if a safety event were investigated (Table 2).

Discussion

This pilot study demonstrated that resident and fellow physicians in the department of internal medicine at our institution understand the necessity of reporting and that it is their responsibility; however, it is not a priority during the busy clinical workflow on the wards. Other investigators have observed similar attitudes/behaviors among physicians across teaching hospitals in the United States [10,12]. In a study by Boike et al [15], despite positive attitudes among internal medicine residents regarding reporting, increased reporting could not be sustained after the initial increase.

Our finding that 71.4% felt it was useful to discuss safety events is similar to Kaldjian et al’s [16] finding that 70% of generalist physicians in teaching hospitals believed that discussing mistakes strengthens professional relationships. The physicians in that study indicated that they usually discuss their errors with colleagues.

Prior to the development of the inpatient safety committee, the institution had a basic resource in place for reporting events, but there were minimal contributions from resident and fellow physicians. The higher reporting rates among nursing, laboratory, and pharmacy services (data not presented) are likely due to a required reporting protocol that is built into their workflow.

Since the inception of the inpatient safety committee, we have started several projects to build upon the foundation of positive resident and fellow physician attitudes. Based on the input of resident and fellow physicians, who are the front-line agents of process improvement, the existing patient safety reporting method was revised and reorganized. Previously, an online standardized patient safety event form was accessible only after several click-throughs from the institution's homepage. Residents and fellows, already operating on thin margins of spare time, found the form confusing and laborious to complete. The online reporting form was moved to the homepage for easy access, and a free-text option was enabled.

Another barrier to reporting was limited access to computers. To address this, the committee instituted a phone hotline reporting system. Instead of entering events in the Midas information management system, residents are now able to call an in-house line and leave an anonymous voicemail to report safety events. Both the online and phone hotline reporting systems feed into a central database.

Lastly, resident and fellow physician education curricula were developed to instruct on the need to report patient safety events. Time is allotted at every monthly resident business meeting to discuss reportable patient safety events and offer feedback about concerns. In addition, the director of patient safety sends weekly safety updates, which are emailed to all faculty, residents, and fellows. These include de-identified safety event reports and any organizational and system improvements made in response to these events. Additionally, a mock root cause analysis takes place each quarter, in which the patient safety director reviews a mock case with trainees to identify root causes and system failures. The committee has committed to transparency in reporting patient safety events as a means to track the results of our efforts and interventions [17].

We plan to resurvey the resident and fellow physicians to reassess stability or changes in attitudes as a result of these physician-focused improvements. A more systematic analysis of temporal trends in reporting and comparisons across residency programs within our health system is being designed.

Conclusion

Reporting patient safety events should not be seen as a cumbersome task in an already busy clinical workday. We intend to develop scalable solutions that take into account the challenges faced by physicians in training. As the institution strives to become a high-reliability organization with a goal of zero serious patient safety events by 2020, we hope to share the lessons from our quality improvement efforts with other learning organizations.

 

Acknowledgements: The authors thank Suela Sulo, PhD, for manuscript review and editing.

Corresponding author: Jill Patton, DO, Advocate Lutheran General Hospital, 1775 Dempster St., Park Ridge, IL 60068, [email protected].

Financial disclosures: None.

References

1. Boonyasai RT, Windish DM, Chakraborti C, et al. Effectiveness of teaching quality improvement to clinicians: a systematic review. JAMA 2007;298:1023–37.

2. Wong BM, Etchells EE, Kuper A, et al. Teaching quality improvement and patient safety to trainees: a systematic review. Acad Med 2010;85:1425–39.

3. Leape LL, Berwick DM. Five years after To Err Is Human: what have we learned? JAMA. 2005;293:2384–90.

4. Kohn LT, Corrigan JM, Donaldson MS. To err is human: Building a safer health system. Washington, DC: National Academy Press; 1999.

5. Swing SR. The ACGME outcome project: retrospective and prospective. Med Teach 2007;29:648–54.

6. ACGME. Common program requirements. Available at www.acgme.org.

7. Kaldjian LC, Jones EW, Wu BJ, et al. Reporting medical errors to improve patient safety: a survey of physicians in teaching hospitals. Arch Intern Med 2008;168:40–6.

8. Kaldjian LC, Jones EW, Rosenthal GE, et al. An empirically derived taxonomy of factors affecting physicians’ willingness to disclose medical errors. J Gen Intern Med 2006;21:942–8.

9. White AA, Gallagher TH, Krauss MJ, et al. The attitudes and experiences of trainees regarding disclosing medical errors to patients. Acad Med 2008;83:250–6.

10. Farley DO, Haviland A, Champagne S, et al. Adverse-event-reporting practices by US hospitals: results of a national survey. Qual Saf Health Care 2008;17:416–23.

11. Schectman JM, Plews-Ogan ML. Physician perception of hospital safety and barriers to incident reporting. Jt Comm J Qual Patient Saf 2006;32:337–43.

12. Milch CE, Salem DN, Pauker SG, et al. Voluntary electronic reporting of medical errors and adverse events. An analysis of 92,547 reports from 26 acute care hospitals. J Gen Intern Med 2006;21:165–70.

13. Madigosky WS, Headrick LA, Nelson K, et al. Changing and sustaining medical students’ knowledge, skills, and attitudes about patient safety and medical fallibility. Acad Med 2006;81:94–101.

14. Jericho BG, Tassone RF, Centomani NM, et al. An assessment of an educational intervention on resident physician attitudes, knowledge, and skills related to adverse event reporting. J Grad Med Educ 2010;2:188–94.

15. Boike JR, Bortman JS, Radosta JM, et al. Patient safety event reporting expectation: does it influence residents’ attitudes and reporting behaviors? J Patient Saf 2013;9:59–67.

16. Kaldjian LC, Forman-Hoffman VL, Jones EW, et al. Do faculty and resident physicians discuss their medical errors? J Med Ethics 2008;34:717–22.

17. Willeumier D. Advocate health care: a systemwide approach to quality and safety. Jt Comm J Qual Saf 2004;30:559–66.

Issue
Journal of Clinical Outcomes Management - NOVEMBER 2015, VOL. 22, NO. 11

From the Department of Internal Medicine, Advocate Lutheran General Hospital, Park Ridge, IL.

 

Abstract

  • Objective: To understand the attitudes and experiences of physicians in training with regard to patient safety event reporting.
  • Methods: Residents and fellows in the department of internal medicine were surveyed using a questionnaire containing 5 closed-ended items. These items examined trainees’ attitudes, experiences and knowledge about safety event reporting and barriers to their reporting.
  • Results: 61% of 80 eligible trainees responded. The majority of residents understood that it is their responsibility to report safety events. Identified barriers to reporting were the complexity of the reporting system, lack of feedback after reporting safety events to gain knowledge of system advances, and reporting was not a priority in clinical workflow.
  • Conclusion: An inpatient safety and quality committee intends to develop solutions to the challenges faced by trainees’ in reporting patient safety events.

 

Nationwide, graduate medical education programs are changing to include a greater focus on quality improvement and patient safety [1,2]. This has been recognized as a priority by the Institute of Medicine, the Joint Commission, and the Accreditation Council for Graduate Medical Education (ACGME) [3–6]. Hospital safety event reporting systems have been implemented to improve patient safety. Despite national expectations and demonstrated benefits of reporting adverse events, most resident and attending physicians fail to understand the value, lack the skills to report, and do not participate in incident reporting [7–9].

Past attempts to increase awareness about patient safety reporting have resulted in minimal participation [10,11]. In relation to other health care providers, attending and resident physicians have the lowest rate of patient safety reporting [12]. Interventions aiming to improve reporting have had mixed results, with sustained improvement being a major challenge [13,14]. To advance our efforts to improve reporting of patient safety events as a means toward improving patient safety, we sought to understand the attitudes and beliefs of our physicians in training with regard to patient safety event reporting.

 

Methods

Setting

Our institution, a community teaching hospital located in Park Ridge, IL, began patient safety event reporting in 2006 by remote data entry using the Midas information management system (MidasPlus, Xerox, Tucson, AZ). In 2012, as part of the system-based practice ACGME competency, we asked residents enter at least 1 patient safety event for each rotation block. The events could be entered with identifying information or anonymously.

Quality Improvement Project

Given the national focus on patient safety and quality improvement, as well as our organizational goal of zero patient safety events by 2020, in 2014 we formed an inpatient safety and quality committee. This committee includes the medical director of patient safety, internal medicine program director, associate program director, chief resident, fellows, residents and attending physicians.  The committee was formed with the long-term objective of advancing patient safety and quality improvement efforts and to decrease preventable errors. As physicians in training are key frontline personnel, the committee decided to focus its initial short-term efforts on this group.

Questionnaire

To understand the magnitude and context of resident reporting behavior, we surveyed the residents and fellows in the department of internal medicine. The fellowships were in cardiology, gastroenterology, and hematology/oncology. The questionnaire we used contained 5 closed-ended items that examined trainees attitudes, experiences, and knowledge about incident reporting. The survey was distributed to the residents and fellows via SurveyMonkeyduring August 2014.

Results

A total of 80 eligible residents and fellows received the survey, and 49 completed it (61% response rate). Almost three-fourths of respondents indicated that they knew whose responsibility it is to report safety events. Over half indicated that they forget to make a report when the ward is busy. Over two-thirds indicated that they felt it was useful to discuss safety events. We anticipated that trainees would have concerns about disciplinary action and blame. We found that 28.5% worried about disciplinary action, but 40.8% did not. Asked whether they agree or disagree with the statement “Junior staff are often unfairly blamed for adverse incidents,” over half neither agreed nor disagreed; almost one-third disagreed, and only 16.3% agreed. Complete data on the responses are shown in Table 1.

When asked to outline reasons for not reporting incidents, participating residents and fellows identified the complexity of reporting system, lack of feedback, lack of updates about new system changes resulting from safety event reporting, 

reporting not a priority in clinical workflow, and fear of losing anonymity if safety event investigated as the most common reasons (Table 2).

Discussion

This pilot study demonstrated that resident and fellow physicians in the department of internal medicine at our institution understand the necessity of reporting and that it is their responsibility; however, it is not a priority during the busy clinical workflow on the wards. Other investigators have observed similar attitudes/behaviors among physicians across teaching hospitals in the United States [10,12]. In a study by Boike et al [15], despite positive attitudes among internal medicine residents regarding reporting, increased reporting could not be sustained after the initial increase.

Our finding that 71.4% felt it was useful to discuss safety events is similar to Kaldjian et al’s [16] finding that 70% of generalist physicians in teaching hospitals believed that discussing mistakes strengthens professional relationships. The physicians in that study indicated that they usually discuss their errors with colleagues.

Prior to the development of the inpatient safety committee, the institution had a basic resource in place for reporting events, but there were minimal contributions from resident and fellow physicians. It is likely that the higher rates of reporting by nursing, laboratory, and pharmacy services (data not presented) that were seen are due to a required reporting protocol that is part of the workflow.

Since the inception of the inpatient safety committee, we have started several projects to build upon the foundation of positive resident and fellow physician attitudes. Based on the input of resident and fellow physicians, who are the front-line agents of process improvement, the existing patient safety reporting method was revised and reorganized. Previously, an online standardized patient safety event form was accessible after several click throughs on the institution’s homepage. This form was confusing and laborious to complete by residents and fellows, who were already operating on thin margins of spare time. The online reporting form was moved to the homepage for easy access and a free text option was enabled.

Another barrier to reporting was access to available computers. As such, the committee instituted a phone hotline reporting system. Instead of residents entering events using the Midas information management system, they now are able to call an in-house line and leave an anonymous voicemail to report safety events. Both the online and phone hotline reporting systems are integrated into a central database.

Lastly, resident and fellow physician education curricula were developed to instruct on the need to report patient safety events. Time is allotted at every monthly resident business meeting to discuss reportable patient safety events and offer feedback about concerns. In addition, the director of patient safety sends weekly safety updates, which are emailed to all faculty, residents, and fellows. These include de-identified safety event reports and any organizational and system improvements made in response to these events. Additionally, a mock root analysis takes place each quarter in which the patient safety director reviews a mock case with trainees to identify root causes and system failures. The committee has committed to transparency of reporting patient safety events as means to track the results of our efforts and interventions [17].

We plan to resurvey the resident and fellow physicians to reassess stability or changes in attitudes as a result of these physician-focused improvements. A more systematic analysis of temporal trends in reporting and comparisons across residency programs within our health system is being designed.

Conclusion

Reporting patient safety events should not be seen as a cumbersome task in an already busy clinical workday. We intend to develop scalable solutions that take into account the challenges faced by physicians in training. As the institution strives to become a high-reliability organization with a goal of zero serious patient safety events by 2020, we hope to share the lessons from our quality improvement efforts with other learning organizations.

 

Acknowledgements: The authors thank Suela Sulo, PhD, for manuscript review and editing.

Corresponding author: Jill Patton, DO, Advocate Lutheran General Hospital, 1775 Dempster St., Park Ridge, IL 60068, [email protected].

Financial disclosures: None.

From the Department of Internal Medicine, Advocate Lutheran General Hospital, Park Ridge, IL.

 

Abstract

  • Objective: To understand the attitudes and experiences of physicians in training with regard to patient safety event reporting.
  • Methods: Residents and fellows in the department of internal medicine were surveyed using a questionnaire containing 5 closed-ended items. These items examined trainees’ attitudes, experiences and knowledge about safety event reporting and barriers to their reporting.
  • Results: 61% of 80 eligible trainees responded. The majority of residents understood that it is their responsibility to report safety events. Identified barriers to reporting were the complexity of the reporting system, lack of feedback after reporting safety events to gain knowledge of system advances, and reporting was not a priority in clinical workflow.
  • Conclusion: An inpatient safety and quality committee intends to develop solutions to the challenges faced by trainees’ in reporting patient safety events.

 

Nationwide, graduate medical education programs are changing to include a greater focus on quality improvement and patient safety [1,2]. This has been recognized as a priority by the Institute of Medicine, the Joint Commission, and the Accreditation Council for Graduate Medical Education (ACGME) [3–6]. Hospital safety event reporting systems have been implemented to improve patient safety. Despite national expectations and demonstrated benefits of reporting adverse events, most resident and attending physicians fail to understand the value, lack the skills to report, and do not participate in incident reporting [7–9].

Past attempts to increase awareness about patient safety reporting have resulted in minimal participation [10,11]. In relation to other health care providers, attending and resident physicians have the lowest rate of patient safety reporting [12]. Interventions aiming to improve reporting have had mixed results, with sustained improvement being a major challenge [13,14]. To advance our efforts to improve reporting of patient safety events as a means toward improving patient safety, we sought to understand the attitudes and beliefs of our physicians in training with regard to patient safety event reporting.

 

Methods

Setting

Our institution, a community teaching hospital located in Park Ridge, IL, began patient safety event reporting in 2006 by remote data entry using the Midas information management system (MidasPlus, Xerox, Tucson, AZ). In 2012, as part of the system-based practice ACGME competency, we asked residents enter at least 1 patient safety event for each rotation block. The events could be entered with identifying information or anonymously.

Quality Improvement Project

Given the national focus on patient safety and quality improvement, as well as our organizational goal of zero patient safety events by 2020, in 2014 we formed an inpatient safety and quality committee. This committee includes the medical director of patient safety, internal medicine program director, associate program director, chief resident, fellows, residents and attending physicians.  The committee was formed with the long-term objective of advancing patient safety and quality improvement efforts and to decrease preventable errors. As physicians in training are key frontline personnel, the committee decided to focus its initial short-term efforts on this group.

Questionnaire

To understand the magnitude and context of resident reporting behavior, we surveyed the residents and fellows in the department of internal medicine. The fellowships were in cardiology, gastroenterology, and hematology/oncology. The questionnaire we used contained 5 closed-ended items that examined trainees attitudes, experiences, and knowledge about incident reporting. The survey was distributed to the residents and fellows via SurveyMonkeyduring August 2014.

Results

A total of 80 eligible residents and fellows received the survey, and 49 completed it (61% response rate). Almost three-fourths of respondents indicated that they knew whose responsibility it is to report safety events. Over half indicated that they forget to make a report when the ward is busy. Over two-thirds indicated that they felt it was useful to discuss safety events. We anticipated that trainees would have concerns about disciplinary action and blame. We found that 28.5% worried about disciplinary action, but 40.8% did not. Asked whether they agree or disagree with the statement “Junior staff are often unfairly blamed for adverse incidents,” over half neither agreed nor disagreed; almost one-third disagreed, and only 16.3% agreed. Complete data on the responses are shown in Table 1.

When asked to outline reasons for not reporting incidents, participating residents and fellows identified the complexity of reporting system, lack of feedback, lack of updates about new system changes resulting from safety event reporting, 

reporting not a priority in clinical workflow, and fear of losing anonymity if safety event investigated as the most common reasons (Table 2).

Discussion

This pilot study demonstrated that resident and fellow physicians in the department of internal medicine at our institution understand the necessity of reporting and that it is their responsibility; however, it is not a priority during the busy clinical workflow on the wards. Other investigators have observed similar attitudes/behaviors among physicians across teaching hospitals in the United States [10,12]. In a study by Boike et al [15], despite positive attitudes among internal medicine residents regarding reporting, increased reporting could not be sustained after the initial increase.

Our finding that 71.4% felt it was useful to discuss safety events is similar to Kaldjian et al’s [16] finding that 70% of generalist physicians in teaching hospitals believed that discussing mistakes strengthens professional relationships. The physicians in that study indicated that they usually discuss their errors with colleagues.

Prior to the development of the inpatient safety committee, the institution had a basic resource in place for reporting events, but there were minimal contributions from resident and fellow physicians. It is likely that the higher rates of reporting by nursing, laboratory, and pharmacy services (data not presented) that were seen are due to a required reporting protocol that is part of the workflow.

Since the inception of the inpatient safety committee, we have started several projects to build upon the foundation of positive resident and fellow physician attitudes. Based on the input of resident and fellow physicians, who are the front-line agents of process improvement, the existing patient safety reporting method was revised and reorganized. Previously, an online standardized patient safety event form was accessible after several click throughs on the institution’s homepage. This form was confusing and laborious to complete by residents and fellows, who were already operating on thin margins of spare time. The online reporting form was moved to the homepage for easy access and a free text option was enabled.

Another barrier to reporting was access to available computers. As such, the committee instituted a phone hotline reporting system. Instead of residents entering events using the Midas information management system, they now are able to call an in-house line and leave an anonymous voicemail to report safety events. Both the online and phone hotline reporting systems are integrated into a central database.

Lastly, resident and fellow physician education curricula were developed to instruct on the need to report patient safety events. Time is allotted at every monthly resident business meeting to discuss reportable patient safety events and offer feedback about concerns. In addition, the director of patient safety sends weekly safety updates, which are emailed to all faculty, residents, and fellows. These include de-identified safety event reports and any organizational and system improvements made in response to these events. Additionally, a mock root analysis takes place each quarter in which the patient safety director reviews a mock case with trainees to identify root causes and system failures. The committee has committed to transparency of reporting patient safety events as means to track the results of our efforts and interventions [17].

We plan to resurvey the resident and fellow physicians to reassess stability or changes in attitudes as a result of these physician-focused improvements. A more systematic analysis of temporal trends in reporting and comparisons across residency programs within our health system is being designed.

Conclusion

Reporting patient safety events should not be seen as a cumbersome task in an already busy clinical workday. We intend to develop scalable solutions that take into account the challenges faced by physicians in training. As the institution strives to become a high-reliability organization with a goal of zero serious patient safety events by 2020, we hope to share the lessons from our quality improvement efforts with other learning organizations.

 

Acknowledgements: The authors thank Suela Sulo, PhD, for manuscript review and editing.

Corresponding author: Jill Patton, DO, Advocate Lutheran General Hospital, 1775 Dempster St., Park Ridge, IL 60068, [email protected].

Financial disclosures: None.

References

1. Boonyasai RT, Windish DM, Chakraborti C, et al. Effectiveness of teaching quality improvement to clinicians: a systematic review. JAMA 2007;298:1023–37.

2. Wong BM, Etchells EE, Kuper A, et al. Teaching quality improvement and patient safety to trainees: a systematic review. Acad Med 2010;85:1425–39.

3. Leape LL, Berwick DM. Five years after To Err Is Human: what have we learned? JAMA. 2005;293:2384–90.

4. Kohn LT, Corrigan JM, Donaldson MS. To err is human: Building a safer health system. Washington, DC: National Academy Press; 1999.

5. Swing SR. The ACGME outcome project: retrospective and prospective. Med Teach 2007;29:648–54.

6. ACGME. Common program requirements. Available at www.acgme.org.

7. Kaldjian LC, Jones EW, Wu BJ, et al. Reporting medical errors to improve patient safety: a survey of physicians in teaching hospitals. Arch Intern Med 2008;168:40–6.

8. Kaldjian LC, Jones EW, Rosenthal GE, et al. An empirically derived taxonomy of factors affecting physicians’ willingness to disclose medical errors. J Gen Intern Med 2006;21:942–8.

9. White AA, Gallagher TH, Krauss MJ, et al. The attitudes and experiences of trainees regarding disclosing medical errors to patients. Acad Med 2008;83:250–6.

10. Farley DO, Haviland A, Champagne S, et al. Adverse-event-reporting practices by US hospitals: results of a national survey. Qual Saf Health Care 2008;17:416–23.

11. Schectman JM, Plews-Ogan ML. Physician perception of hospital safety and barriers to incident reporting. Jt Comm J Qual Patient Saf 2006;32:337–43.

12. Milch CE, Salem DN, Pauker SG, et al. Voluntary electronic reporting of medical errors and adverse events. An analysis of 92,547 reports from 26 acute care hospitals. J Gen Intern Med 2006;21:165–70.

13. Madigosky WS, Headrick LA, Nelson K, et al. Changing and sustaining medical students’ knowledge, skills, and attitudes about patient safety and medical fallibility. Acad Med 2006;81:94–101.

14. Jericho BG, Tassone RF, Centomani NM, et al. An assessment of an educational intervention on resident physician attitudes, knowledge, and skills related to adverse event reporting. J Grad Med Educ 2010;2:188–94.

15. Boike JR, Bortman JS, Radosta JM, et al. Patient safety event reporting expectation: does it influence residents’ attitudes and reporting behaviors? J Patient Saf 2013;9:59–67.

16. Kaldjian LC, Forman-Hoffman VL, Jones EW, et al. Do faculty and resident physicians discuss their medical errors? J Med Ethics 2008;34:717–22.

17. Willeumier D. Advocate health care: a systemwide approach to quality and safety. Jt Comm J Qual Saf 2004;30:559–66.


Issue
Journal of Clinical Outcomes Management - NOVEMBER 2015, VOL. 22, NO. 11
Display Headline
Attitudes of Physicians in Training Regarding Reporting of Patient Safety Events

Women usually given dabigatran in lower dose

Article Type
Changed
Fri, 01/18/2019 - 15:21
Display Headline
Women usually given dabigatran in lower dose

In real-world practice, women who are prescribed dabigatran for atrial fibrillation (AF) are usually given the lower dose of the drug, even though the higher dose appears to be more protective against stroke, according to a Canadian database analysis published online Oct. 27 in Circulation: Cardiovascular Quality and Outcomes.

In RE-LY, the main randomized clinical trial showing that dabigatran is more effective at preventing stroke and provoked fewer bleeding episodes than warfarin, only 37% of the participants were women (N Engl J Med. 2009 Sep 17;361[12]:1139-51). This raises the question of whether the study’s results are truly applicable to women. In addition, the women in that trial showed plasma concentrations of dabigatran that were 30% higher than those in men, suggesting that the drug’s safety profile may differ between women and men. However, there was no mention of sex differences related to outcomes, said Meytal Avgil Tsadok, Ph.D., of the division of clinical epidemiology, McGill University Health Center, Montreal, and her associates.


To examine sex-based differences in prescribing patterns in real-world practice, the investigators performed a population-based cohort study among 631,110 residents of Quebec who were discharged from the hospital with either a primary or a secondary diagnosis of AF during a 14-year period. They identified 15,918 dabigatran users and matched them for comorbidity, age at AF diagnosis, and date of first prescription for anticoagulants with 47,192 warfarin users (control subjects). The 31,786 women and 31,324 men participating in this study were followed for a median of 1.3 years (range, 0-3.2 years) for the development of stroke/TIA, bleeding events, or hospitalization for MI.

The researchers found that dabigatran use differed markedly between women and men. Men were prescribed the lower dose (110 mg) and the higher dose (150 mg) of dabigatran in nearly equal proportions, but women were prescribed the lower dose (64.8%) much more often than the higher dose (35.2%). In a further analysis of the data, women were much more likely than men to fill prescriptions for the lower dose (odds ratio, 1.35). This was true even though women, but not men, showed a trend toward a lower incidence of stroke when prescribed the higher dose of dabigatran, compared with warfarin.
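For readers less familiar with the statistic, the odds ratio above compares the odds of filling the lower-dose prescription between the two sexes. The sketch below illustrates the crude (unadjusted) calculation using hypothetical counts that roughly mirror the reported dose splits; it is not the study's data, and the study's reported 1.35 is an adjusted estimate.

```python
# Illustrative only: hypothetical counts, not the study's actual data.

def odds_ratio(a, b, c, d):
    """Crude odds ratio from a 2x2 table:
       a = women on low dose,  b = women on high dose
       c = men on low dose,    d = men on high dose
    """
    return (a / b) / (c / d)

# Hypothetical cohort of 1,000 women and 1,000 men, roughly matching
# the reported splits (women ~65%/35% low/high, men ~50%/50%):
crude_or = odds_ratio(650, 350, 500, 500)
print(round(crude_or, 2))  # → 1.86 (crude; the study's 1.35 is adjusted)
```

The gap between a crude figure like this and the published 1.35 reflects adjustment for covariates such as age and comorbidity.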

These prescribing practices remained consistent even when the participants were categorized according to age. More women than men used low-dose dabigatran whether they were younger than 75 years of age (22.8% vs. 18.5%) or older than 75 years (83.5% vs. 76.0%). These findings show that, regardless of patient age or comorbidities, “women have 35% higher chances to be prescribed a lower dabigatran dose than men, although women have a higher baseline risk for stroke,” Dr. Tsadok and her associates said (Circ Cardiovasc Qual Outcomes. 2015 Oct 27 [doi: 10.1161/circoutcomes.114.001398]).

The reason for this discrepancy is not yet known. A similar pattern of prescribing was noted in a Danish population-based cohort study. It’s possible that clinicians perceive women as frailer patients than men, “so they tend to be more concerned about safety and, therefore, prescribe women with a lower dose, compromising efficacy,” the investigators said.

Their study was limited in that follow-up was relatively short at approximately 1 year. It is therefore possible that they underestimated the risks of stroke/TIA, bleeding events, or MI hospitalization, Dr. Tsadok and her associates added.

This study was supported by the Canadian Institutes of Health Research. Dr. Tsadok and her associates reported having no relevant financial disclosures.



Article Source

FROM CIRCULATION: CARDIOVASCULAR QUALITY AND OUTCOMES


Vitals

Key clinical point: Women prescribed dabigatran for AF are usually given the lower dose of the drug, for unknown reasons.

Major finding: Men were prescribed the lower dose (110 mg) in nearly equal numbers with the higher dose (150 mg) of dabigatran, but women were prescribed the lower dose (64.8%) much more often than the higher dose (35.2%).

Data source: An analysis of prescribing patterns in a population-based cohort of 31,786 women and 31,324 men who had AF living in Quebec.

Disclosures: This study was supported by the Canadian Institutes of Health Research. Dr. Tsadok and her associates reported having no relevant financial disclosures.