Affiliations
Department of Medicine, University of Ottawa, and Ottawa Health Research Institute, Ottawa, Ontario, Canada

Sumant R. Ranji, MD

Diving Into Diagnostic Uncertainty: Strategies to Mitigate Cognitive Load: In Reference to: “Focused Ethnography of Diagnosis in Academic Medical Centers”


We read the article by Chopra et al. “Focused Ethnography of Diagnosis in Academic Medical Centers” with great interest.1 This ethnographic study provided valuable insights into possible interventions to encourage diagnostic thinking.

Duty hour regulations and the resulting increase in handoffs have shifted the social experience of diagnosis from one that occurs within teams to one that often occurs between teams, during handoffs between providers.2 While the article highlighted barriers to diagnosis, including distractions and time pressure, it did not explicitly discuss cognitive load theory, an educational framework that Young et al.3 have applied to improve instruction in the handoff process. These investigators showed how progressively experienced learners retain more information when using a structured scaffold for information, such as the I-PASS mnemonic.4

To mitigate the effects of distraction on the transfer of information, especially in cases with high diagnostic uncertainty, cognitive load must be explicitly considered. A structured framework for communication about diagnostic uncertainty informed by cognitive load theory would be a novel innovation that would help not only graduate medical education but could also improve diagnostic accuracy.

Disclosures

The authors have no conflicts of interest to disclose.

 

References

1. Chopra V, Harrod M, Winter S, et al. Focused ethnography of diagnosis in academic medical centers. J Hosp Med. 2018;13(10):668-672. doi: 10.12788/jhm.2966.
2. Duong JA, Jensen TP, Morduchowicz S, Mourad M, Harrison JD, Ranji SR. Exploring physician perspectives of residency holdover handoffs: a qualitative study to understand an increasingly important type of handoff. J Gen Intern Med. 2017;32(6):654-659. doi: 10.1007/s11606-017-4009-y.
3. Young JQ, ten Cate O, O’Sullivan PS, Irby DM. Unpacking the complexity of patient handoffs through the lens of cognitive load theory. Teach Learn Med. 2016;28(1):88-96. doi: 10.1080/10401334.2015.1107491.
4. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812. doi: 10.1056/NEJMc1414788.

Issue
Journal of Hospital Medicine 13(11)
Page Number
804


© 2018 Society of Hospital Medicine

Correspondence: Lekshmi Santhosh, MD; University of California-San Francisco, Department of Medicine, Divisions of Hospital Medicine & Pulmonary and Critical Care Medicine, 505 Parnassus Avenue, San Francisco, CA 94143; E-mail: [email protected]

Standardized attending rounds to improve the patient experience: A pragmatic cluster randomized controlled trial


Patient experience has recently received heightened attention, given evidence supporting an association between patient experience and quality of care,1 and the coupling of patient satisfaction to reimbursement rates for Medicare patients.2 Patient experience is often assessed through surveys of patient satisfaction, which correlates with patient perceptions of nurse and physician communication.3 Teaching hospitals introduce variables that may affect communication, including the involvement of multiple levels of care providers and competing patient care and educational priorities. Patients admitted to teaching services express decreased satisfaction with coordination and overall care compared with patients on nonteaching services.4

Clinical supervision of trainees on teaching services is primarily achieved through attending rounds (AR), where patients’ clinical presentations and management are discussed with an attending physician. Poor communication during AR may negatively affect the patient experience through inefficient care coordination among the inter-professional care team or through implementation of interventions without patients’ knowledge or input.5-11 Although patient engagement in rounds has been associated with higher patient satisfaction with rounds,12-19 AR and case presentations often occur at a distance from the patient’s bedside.20,21 Furthermore, AR vary in the time allotted per patient and the extent of participation of nurses and other allied health professionals. Standardized bedside rounding processes have been shown to improve efficiency, decrease daily resident work hours,22 and improve nurse-physician teamwork.23

Despite these benefits, recent prospective studies of bedside AR interventions have not improved patient satisfaction with rounds. One involved the implementation of interprofessional patient-centered bedside rounds on a nonteaching service,24 while the other evaluated the impact of integrating athletic principles into multidisciplinary work rounds.25 Prior work at our institution sought to develop AR practice recommendations that foster an optimal patient experience while maintaining provider workflow efficiency, facilitating interdisciplinary communication, and advancing trainee education.26 Using these AR recommendations, we conducted a prospective randomized controlled trial to evaluate the impact of implementing a standardized bedside AR model on patient satisfaction with rounds. We also assessed attending physician and trainee satisfaction with rounds, and perceived and actual AR duration.

METHODS

Setting and Participants

This trial was conducted on the internal medicine teaching service of the University of California San Francisco Medical Center from September 3, 2013 to November 27, 2013. The service is comprised of 8 teams, with a total average daily census of 80 to 90 patients. Teams are comprised of an attending physician, a senior resident (in the second or third year of residency training), 2 interns, and a third- and/or fourth-year medical student.


This trial, which was approved by the University of California, San Francisco Committee on Human Research (UCSF CHR) and was registered with ClinicalTrials.gov (NCT01931553), was classified under Quality Improvement and did not require informed consent of patients or providers.

Intervention Description

We conducted a cluster randomized trial to evaluate the impact of a bundled set of 5 AR practice recommendations, adapted from published work,26 on patient experience, as well as on attending and trainee satisfaction: 1) huddling to establish the rounding schedule and priorities; 2) conducting bedside rounds; 3) integrating bedside nurses; 4) completing real-time order entry using bedside computers; 5) updating the whiteboard in each patient’s room with care plan information.

At the beginning of each month, study investigators (Nader Najafi and Bradley Monash) led a 1.5-hour workshop to train attending physicians and trainees allocated to the intervention arm on the recommended AR practices. Participants also received informational handouts to be referenced during AR. Attending physicians and trainees randomized to the control arm continued usual rounding practices. Control teams were notified that there would be observers on rounds but were not informed of the study aims.

Randomization and Team Assignments

The medicine service was divided into 2 arms, each comprised of 4 teams. Using a coin flip, Cluster 1 (Teams A, B, C and D) was randomized to the intervention, and Cluster 2 (Teams E, F, G and H) was randomized to the control. This design was pragmatically chosen to ensure that 1 team from each arm would admit patients daily. Allocation concealment of attending physicians and trainees was not possible given the nature of the intervention. Patients were blinded to study arm allocation.

MEASURES AND OUTCOMES

Adherence to Practice Recommendations

Thirty premedical students served as volunteer AR auditors. Each auditor received orientation and training in data collection techniques during a single 2-hour workshop. The auditors, blinded to study arm allocation, independently observed morning AR during weekdays and recorded the completion of the following elements as a dichotomous (yes/no) outcome: pre-rounds huddle, participation of nurse in AR, real-time order entry, and whiteboard use. They recorded the duration of AR per day for each team (minutes) and the rounding model for each patient rounding encounter during AR (bedside, hallway, or card flip).23 Bedside rounds were defined as presentation and discussion of the patient care plan in the presence of the patient. Hallway rounds were defined as presentation and discussion of the patient care plan partially outside the patient’s room and partially in the presence of the patient. Card-flip rounds were defined as presentation and discussion of the patient care plan entirely outside of the patient’s room without the team seeing the patient together. Two auditors simultaneously observed a random subset of patient-rounding encounters to evaluate inter-rater reliability, and the concordance between auditor observations was good (Pearson correlation = 0.66).27

Patient-Related Outcomes

The primary outcome was patient satisfaction with AR, assessed using a survey adapted from published work.12,14,28,29 Patients were approached to complete the questionnaire after they had experienced at least 1 AR. Patients were excluded if they were non-English-speaking, unavailable (eg, off the unit for testing or treatment), in isolation, or had impaired mental status. For patients admitted multiple times during the study period, only the first questionnaire was used. Survey questions included patient involvement in decision-making, quality of communication between patient and medicine team, and the perception that the medicine team cared about the patient. Patients were asked to state their level of agreement with each item on a 5-point Likert scale. We obtained data on patient demographics from administrative datasets.

Healthcare Provider Outcomes

Attending physicians and trainees on service for at least 7 consecutive days were sent an electronic survey, adapted from published work.25,30 Questions assessed satisfaction with AR, perceived value of bedside rounds, and extent of patient and nursing involvement. Level of agreement with each item was captured on a continuous scale from 0 (strongly disagree) to 100 (strongly agree), or from 0 (far too little) to 100 (far too much), with 50 equating to “about right.” Attending physicians and trainees were also asked to estimate the average duration of AR (in minutes).

Statistical Analyses

Analyses were blinded to study arm allocation and followed intention-to-treat principles. One attending physician crossed over from intervention to control arm; patient surveys associated with this attending (n = 4) were excluded to avoid contamination. No trainees crossed over.

Demographic and clinical characteristics of patients who completed the survey are reported (Appendix). To compare patient satisfaction scores, we used a random-effects regression model to account for correlation among responses within teams within randomized clusters, defining teams by attending physician. As this correlation was negligible and not statistically significant, we did not adjust ordinary linear regression models for clustering. Given observed differences in patient characteristics, we adjusted for a number of covariates (eg, age, gender, insurance payer, race, marital status, trial group arm).

We conducted simple linear regression for attending and trainee satisfaction comparisons between arms, adjusting only for trainee type (eg, resident, intern, and medical student).

We compared the frequency with which intervention and control teams adhered to the 5 recommended AR practices using chi-square tests. We used independent Student’s t tests to compare total duration of AR by teams within each arm, as well as mean time spent per patient.
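For a 2x2 adherence comparison, the chi-square statistic can be computed directly from the table counts. A minimal sketch using approximate counts back-calculated from the reported bedside-rounding rates (52.9% of 1855 intervention encounters vs. 5.4% of 1903 control encounters); this is an illustration, not the study's actual analysis code:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Approximate counts: bedside vs. non-bedside encounters by arm.
stat = chi_square_2x2(981, 874, 103, 1800)
print(stat > 3.84)  # True: far exceeds the df = 1 critical value at alpha = 0.05
```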

This trial had a fixed number of arms (n = 2), each of fixed size (n = 600), based on the average monthly inpatient census on the medicine service. With this fixed sample size, 80% power, and α = 0.05, the trial could detect a 0.16 difference in patient satisfaction scores between groups.
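The quoted detectable difference can be reproduced with the standard two-sample normal-approximation power formula. A sketch using only the Python standard library, assuming the 0.16 is a standardized effect size (the text does not state the scale explicitly):

```python
from statistics import NormalDist

def detectable_effect(n_per_arm, alpha=0.05, power=0.80):
    """Smallest standardized difference (Cohen's d) detectable by a
    two-sided two-sample comparison, normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    return (z_alpha + z_beta) * (2 / n_per_arm) ** 0.5

print(round(detectable_effect(600), 2))  # 0.16 with n = 600 per arm
```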

All analyses were conducted using SAS® v 9.4 (SAS Institute, Inc., Cary, NC).


RESULTS

We observed 241 AR involving 1855 patient rounding encounters in the intervention arm and 264 AR involving 1903 patient rounding encounters in the control arm (response rates shown in Figure 1).

Figure 1. Study flow diagram.
Intervention teams adopted each of the recommended AR practices at significantly higher rates than control teams, with the largest difference for rounds conducted at the bedside (52.9% vs. 5.4%; Figure 2).
Figure 2. Prevalence of recommended rounding practices.
Teams in the intervention arm demonstrated highest adherence to the pre-rounds huddle (78.1%) and lowest adherence to whiteboard use (29.9%).

Patient Satisfaction and Clinical Outcomes

Five hundred ninety-five patients were allocated to the intervention arm and 605 were allocated to the control arm (Figure 1). Mean age, gender, race, marital status, primary language, and insurance provider did not differ between intervention and control arms (Table 1).

Table 1. Hospitalized Patient Characteristics by Intervention and Control Arms.
One hundred forty-six (24.5%) and 141 (23.3%) patients completed surveys in the intervention and control arms, respectively. Patients who completed surveys in each arm were younger and more likely to have commercial insurance (Appendix).

Patients in the intervention arm reported significantly higher satisfaction with AR and felt more cared for by their medicine team (Table 2).
Table 2. Patient, Attending, and Trainee Satisfaction by Randomized Arm.
Patient-perceived quality of communication and shared decision-making did not differ between arms.

Actual and Perceived Duration of Attending Rounds

The intervention was associated with a trend toward shorter total AR duration (143 vs. 151 minutes on average, P = 0.052) and with significantly less time spent per patient (19 vs. 23 minutes on average, P < 0.001). Despite this, trainees in the intervention arm perceived AR to last longer (mean estimated time: 167 vs. 152 minutes, P < 0.001).

Healthcare Provider Outcomes

We observed 79 attending physicians and trainees in the intervention arm and 78 in the control arm, with survey response rates shown in Figure 1. Attending physicians in the intervention and the control arms reported high levels of satisfaction with the quality of AR (Table 2). Attending physicians in the intervention arm were more likely to report an appropriate level of patient involvement and nurse involvement.

Although trainees in the intervention and control arms reported high levels of satisfaction with the quality of AR, trainees in the intervention arm reported lower satisfaction with AR compared with control arm trainees (Table 2). Trainees in the intervention arm reported that AR involved less autonomy, efficiency, and teaching. Trainees in the intervention arm also scored patient involvement more towards the “far too much” end of the scale compared with “about right” in the control arm. However, trainees in the intervention arm perceived nurse involvement closer to “about right,” as opposed to “far too little” in the control arm.

CONCLUSION/DISCUSSION

Training internal medicine teams to adhere to 5 recommended AR practices increased patient satisfaction with AR and the perception that patients were more cared for by their medicine team. Despite the intervention potentially shortening the duration of AR, attending physicians and trainees perceived AR to last longer, and trainee satisfaction with AR decreased.

Teams in the intervention arm adhered to all recommended rounding practices at higher rates than the control teams. Although intervention teams rounded at the bedside 53% of the time, they were encouraged to conduct bedside rounds only for patients who desired to participate in rounds, who did not have altered mental status, and for whom the clinical discussion was not too sensitive to occur at the bedside. Of the recommended rounding behaviors, the lowest adherence was seen with whiteboard use.

A major component of the intervention was to move the clinical presentation to the patient’s bedside. Most patients prefer being included in rounds and partaking in trainee education.12-19,28,29,31-33 Patients may also perceive that more time is spent with them during bedside case presentations,14,28 and exposure to providers conferring on their care may enhance patient confidence in the care being delivered.12 Although a recent study of patient-centered bedside rounding on a nonteaching service did not result in increased patient satisfaction,24 teaching services may offer more opportunities for improvement in care coordination and communication.4

Other aspects of the intervention may have contributed to increased patient satisfaction with AR. The pre-rounds huddle may have helped teams prioritize which patients required more time or would benefit most from bedside rounds. The involvement of nurses in AR may have bolstered communication and team dynamics, enhancing the patient’s perception of interprofessional collaboration. Real-time order entry might have led to more efficient implementation of the care plan, and whiteboard use may have helped to keep patients abreast of the care plan.

Patients in the intervention arm felt more cared for by their medicine teams but did not report improvements in communication or in shared decision-making. Prior work highlights that limited patient engagement, activation, and shared decision-making may occur during AR.24,34 Patient-physician communication during AR is challenged by time pressures and competing priorities, including the “need” for trainees to demonstrate their medical knowledge and clinical skills. Efforts that encourage bedside rounding should include communication training with respect to patient engagement and shared decision-making.

Attending physicians reported positive attitudes toward bedside rounding, consistent with prior studies.13,21,31 However, trainees in the intervention arm expressed decreased satisfaction with AR, estimating that AR took longer and reporting too much patient involvement. Prior studies reflect similar bedside-rounding concerns, including perceived workflow inefficiencies, infringement on teaching opportunities, and time constraints.12,20,35 Trainees are under intense time pressures to complete their work, attend educational conferences, and leave the hospital to attend afternoon clinic or to comply with duty-hour restrictions. Trainees value succinctness,12,35,36 so the perception that intervention AR lasted longer likely contributed to trainee dissatisfaction.

Reduced trainee satisfaction with intervention AR may have also stemmed from the perception of decreased autonomy and less teaching, both valued by trainees.20,35,36 The intervention itself reduced trainee autonomy because usual practice at our hospital involves residents deciding where and how to round. Attending physician presence at the bedside during rounds may have further infringed on trainee autonomy if the patient looked to the attending for answers, or if the attending was seen as the AR leader. Attending physicians may mitigate the risk of compromising trainee autonomy by allowing the trainee to speak first, ensuring the trainee is positioned closer to, and at eye level with, the patient, and redirecting patient questions to the trainee as appropriate. Optimizing trainee experience with bedside AR requires preparation and training of attending physicians, who may feel inadequately prepared to lead bedside rounds and conduct bedside teaching.37 Faculty must learn how to preserve team efficiency, create a safe, nonpunitive bedside environment that fosters the trainee-patient relationship, and ensure rounds remain educational.36,38,39

The intervention reduced the average time spent on AR and time spent per patient. Studies examining the relationship between bedside rounding and duration of rounds have yielded mixed results: some have demonstrated no effect of bedside rounds on rounding time,28,40 while others report longer rounding times.37 The pre-rounds huddle and real-time order writing may have enhanced workflow efficiency.

Our study has several limitations. These results reflect the experience of a single large academic medical center and may not be generalizable to other settings. Although overall patient response to the survey was low and may not be representative of the entire patient population, response rates in the intervention and control arms were equivalent. Non-English speaking patients may have preferences that were not reflected in our survey results, and we did not otherwise quantify individual reasons for survey noncompletion. The presence of auditors on AR may have introduced observer bias. There may have been crossover effect; however, observed prevalence of individual practices remained low in the control arm. The 1.5-hour workshop may have inadequately equipped trainees with the complex skills required to lead and participate in bedside rounding, and more training, experience, and feedback may have yielded different results. For instance, residents with more exposure to bedside rounding express greater appreciation of its role in education and patient care.20 While adherence to some of the recommended practices remained low, we did not employ a full range of change-management techniques. Instead, we opted for a “low intensity” intervention (eg, single workshop, handouts) that relied on voluntary adoption by medicine teams and that we hoped other institutions could reproduce. Finally, we did not assess the relative impact of individual rounding behaviors on the measured outcomes.

In conclusion, training medicine teams to adhere to a standardized bedside AR model increased patient satisfaction with rounds. Concomitant trainee dissatisfaction may require further experience and training of attending physicians and trainees to ensure successful adoption.

Acknowledgements


We would like to thank all patients, providers, and trainees who participated in this study. We would also like to acknowledge the following volunteer auditors who observed teams daily: Arianna Abundo, Elahhe Afkhamnejad, Yolanda Banuelos, Laila Fozoun, Soe Yupar Khin, Tam Thien Le, Wing Sum Li, Yaqiao Li, Mengyao Liu, Tzyy-Harn Lo, Shynh-Herng Lo, David Lowe, Danoush Paborji, Sa Nan Park, Urmila Powale, Redha Fouad Qabazard, Monique Quiroz, John-Luke Marcelo Rivera, Manfred Roy Luna Salvador, Tobias Gowen Squier-Roper, Flora Yan Ting, Francesca Natasha T. Tizon, Emily Claire Trautner, Stephen Weiner, Alice Wilson, Kimberly Woo, Bingling J Wu, Johnny Wu, Brenda Yee. Statistical expertise was provided by Joan Hilton from the UCSF Clinical and Translational Science Institute (CTSI), which is supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF-CTSI Grant Number UL1 TR000004. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Thanks also to Oralia Schatzman, Andrea Mazzini, and Erika Huie for their administrative support, and John Hillman for data-related support. Special thanks to Kirsten Kangelaris and Andrew Auerbach for their valuable feedback throughout, and to Maria Novelero and Robert Wachter for their divisional support of this project. 

Disclosure

The authors report no financial conflicts of interest.

References

1. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):1-18. PubMed
2. Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) Fact Sheet. August 2013. Centers for Medicare and Medicaid Services (CMS). Baltimore, MD.http://www.hcahpsonline.org/files/August_2013_HCAHPS_Fact_Sheet3.pdf. Accessed December 1, 2015.
3. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17:41-48. PubMed

4. Wray CM, Flores A, Padula WV, Prochaska MT, Meltzer DO, Arora VM. Measuring patient experiences on hospitalist and teaching services: Patient responses to a 30-day postdischarge questionnaire. J Hosp Med. 2016;11(2):99-104. PubMed
5. Bharwani AM, Harris GC, Southwick FS. Perspective: A business school view of medical interprofessional rounds: transforming rounding groups into rounding teams. Acad Med. 2012;87(12):1768-1771. PubMed
6. Chand DV. Observational study using the tools of lean six sigma to improve the efficiency of the resident rounding process. J Grad Med Educ. 2011;3(2):144-150. PubMed

7. Stickrath C, Noble M, Prochazka A, et al. Attending rounds in the current era: what is and is not happening. JAMA Intern Med. 2013;173(12):1084-1089. PubMed
8. Weber H, Stöckli M, Nübling M, Langewitz WA. Communication during ward rounds in internal medicine. An analysis of patient-nurse-physician interactions using RIAS. Patient Educ Couns. 2007;67(3):343-348. PubMed
9. McMahon GT, Katz JT, Thorndike ME, Levy BD, Loscalzo J. Evaluation of a redesign initiative in an internal-medicine residency. N Engl J Med. 2010;362(14):1304-1311. PubMed

10. Amoss J. Attending rounds: where do we go from here?: comment on “Attending rounds in the current era”. JAMA Intern Med. 2013;173(12):1089-1090. PubMed
11. Curley C, McEachern JE, Speroff T. A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care. 1998;36(suppl 8):AS4-A12. PubMed
12. Wang-Cheng RM, Barnas GP, Sigmann P, Riendl PA, Young MJ. Bedside case presentations: why patients like them but learners don’t. J Gen Intern Med. 1989;4(4):284-287. PubMed

13. Chauke, HL, Pattinson RC. Ward rounds—bedside or conference room? S Afr Med J. 2006;96(5):398-400. PubMed
14. Lehmann LS, Brancati FL, Chen MC, Roter D, Dobs AS. The effect of bedside case presentations on patients’ perceptions of their medical care. N Engl J Med. 1997;336(16):336, 1150-1155. PubMed
15. Simons RJ, Baily RG, Zelis R, Zwillich CW. The physiologic and psychological effects of the bedside presentation. N Engl J Med. 1989;321(18):1273-1275. PubMed

16. Wise TN, Feldheim D, Mann LS, Boyle E, Rustgi VK. Patients’ reactions to house staff work rounds. Psychosomatics. 1985;26(8):669-672. PubMed
17. Linfors EW, Neelon FA. Sounding Boards. The case for bedside rounds. N Engl J Med. 1980;303(21):1230-1233. PubMed
18. Nair BR, Coughlan JL, Hensley MJ. Student and patient perspectives on bedside teaching. Med Educ. 1997;31(5):341-346. PubMed

19. Romano J. Patients’ attitudes and behavior in ward round teaching. JAMA. 1941;117(9):664-667.
20. Gonzalo JD, Masters PA, Simons RJ, Chuang CH. Attending rounds and bedside case presentations: medical student and medicine resident experiences and attitudes. Teach Learn Med. 2009;21(2):105-110. PubMed
21. Shoeb M, Khanna R, Fang M, et al. Internal medicine rounding practices and the Accreditation Council for Graduate Medical Education core competencies. J Hosp Med. 2014;9(4):239-243. PubMed

22. Calderon AS, Blackmore CC, Williams BL, et al. Transforming ward rounds through rounding-in-flow. J Grad Med Educ. 2014;6(4):750-755. PubMed
23. Henkin S, Chon TY, Christopherson ML, Halvorsen AJ, Worden LM, Ratelle JT. Improving nurse-physician teamwork through interprofessional bedside rounding. J Multidiscip Healthc. 2016;9:201-205. PubMed
24. O’Leary KJ, Killarney A, Hansen LO, et al. Effect of patient-centred bedside rounds on hospitalised patients’ decision control, activation and satisfaction with care. BMJ Qual Saf. 2016;25:921-928. PubMed

25. Southwick F, Lewis M, Treloar D, et al. Applying athletic principles to medical rounds to improve teaching and patient care. Acad Med. 2014;89(7):1018-1023. PubMed
26. Najafi N, Monash B, Mourad M, et al. Improving attending rounds: Qualitative reflections from multidisciplinary providers. Hosp Pract (1995). 2015;43(3):186-190. PubMed
27. Altman DG. Practical Statistics For Medical Research. Boca Raton, FL: Chapman & Hall/CRC; 2006.

28. Gonzalo JD, Chuang CH, Huang G, Smith C. The return of bedside rounds: an educational intervention. J Gen Intern Med. 2010;25(8):792-798. PubMed
29. Fletcher KE, Rankey DS, Stern DT. Bedside interactions from the other side of the bedrail. J Gen Intern Med. 2005;20(1):58-61. PubMed

30. Gatorounds: Applying Championship Athletic Principles to Healthcare. University of Florida Health. http://gatorounds.med.ufl.edu/surveys/. Accessed March 1, 2013.
31. Gonzalo JD, Heist BS, Duffy BL, et al. The value of bedside rounds: a multicenter qualitative study. Teach Learn Med. 2013;25(4):326-333. PubMed
32. Rogers HD, Carline JD, Paauw DS. Examination room presentations in general internal medicine clinic: patients’ and students’ perceptions. Acad Med. 2003;78(9):945-949. PubMed

33. Fletcher KE, Furney SL, Stern DT. Patients speak: what’s really important about bedside interactions with physician teams. Teach Learn Med. 2007;19(2):120-127. PubMed
34. Satterfield JM, Bereknyei S, Hilton JF, et al. The prevalence of social and behavioral topics and related educational opportunities during attending rounds. Acad Med. 2014; 89(11):1548-1557. PubMed
35. Kroenke K, Simmons JO, Copley JB, Smith C. Attending rounds: a survey of physician attitudes. J Gen Intern Med. 1990;5(3):229-233. PubMed

36. Castiglioni A, Shewchuk RM, Willett LL, Heudebert GR, Centor RM. A pilot study using nominal group technique to assess residents’ perceptions of successful attending rounds. J Gen Intern Med. 2008;23(7):1060-1065. PubMed
37. Crumlish CM, Yialamas MA, McMahon GT. Quantification of bedside teaching by an academic hospitalist group. J Hosp Med. 2009;4(5):304-307. PubMed
38. Gonzalo JD, Wolpaw DR, Lehman E, Chuang CH. Patient-centered interprofessional collaborative care: factors associated with bedside interprofessional rounds. J Gen Intern Med. 2014;29(7):1040-1047. PubMed

39. Roy B, Castiglioni A, Kraemer RR, et al. Using cognitive mapping to define key domains for successful attending rounds. J Gen Intern Med. 2012;27(11):1492-1498. PubMed
40. Bhansali P, Birch S, Campbell JK, et al. A time-motion study of inpatient rounds using a family-centered rounds model. Hosp Pediatr. 2013;3(1):31-38. PubMed

Journal of Hospital Medicine - 12(3), pages 143-149

Patient experience has recently received heightened attention given evidence supporting an association between patient experience and quality of care,1 and the coupling of patient satisfaction to reimbursement rates for Medicare patients.2 Patient experience is often assessed through surveys of patient satisfaction, which correlates with patient perceptions of nurse and physician communication.3 Teaching hospitals introduce variables that may impact communication, including the involvement of multiple levels of care providers and competing patient care vs. educational priorities. Patients admitted to teaching services express decreased satisfaction with coordination and overall care compared with patients on nonteaching services.4

Clinical supervision of trainees on teaching services is primarily achieved through attending rounds (AR), where patients’ clinical presentations and management are discussed with an attending physician. Poor communication during AR may negatively affect the patient experience through inefficient care coordination among the inter-professional care team or through implementation of interventions without patients’ knowledge or input.5-11 Although patient engagement in rounds has been associated with higher patient satisfaction with rounds,12-19 AR and case presentations often occur at a distance from the patient’s bedside.20,21 Furthermore, AR vary in the time allotted per patient and the extent of participation of nurses and other allied health professionals. Standardized bedside rounding processes have been shown to improve efficiency, decrease daily resident work hours,22 and improve nurse-physician teamwork.23

Despite these benefits, recent prospective studies of bedside AR interventions have not improved patient satisfaction with rounds. One involved the implementation of interprofessional patient-centered bedside rounds on a nonteaching service,24 while the other evaluated the impact of integrating athletic principles into multidisciplinary work rounds.25 Work at our institution had sought to develop AR practice recommendations to foster an optimal patient experience, while maintaining provider workflow efficiency, facilitating interdisciplinary communication, and advancing trainee education.26 Using these AR recommendations, we conducted a prospective randomized controlled trial to evaluate the impact of implementing a standardized bedside AR model on patient satisfaction with rounds. We also assessed attending physician and trainee satisfaction with rounds, and perceived and actual AR duration.

METHODS

Setting and Participants

This trial was conducted on the internal medicine teaching service of the University of California San Francisco Medical Center from September 3, 2013 to November 27, 2013. The service comprises 8 teams, with a total average daily census of 80 to 90 patients. Each team consists of an attending physician, a senior resident (in the second or third year of residency training), 2 interns, and a third- and/or fourth-year medical student.


This trial, which was approved by the University of California, San Francisco Committee on Human Research (UCSF CHR) and was registered with ClinicalTrials.gov (NCT01931553), was classified under Quality Improvement and did not require informed consent of patients or providers.

Intervention Description

We conducted a cluster randomized trial to evaluate the impact of a bundled set of 5 AR practice recommendations, adapted from published work,26 on patient experience, as well as on attending and trainee satisfaction: 1) huddling to establish the rounding schedule and priorities; 2) conducting bedside rounds; 3) integrating bedside nurses; 4) completing real-time order entry using bedside computers; 5) updating the whiteboard in each patient’s room with care plan information.

At the beginning of each month, study investigators (Nader Najafi and Bradley Monash) led a 1.5-hour workshop to train attending physicians and trainees allocated to the intervention arm on the recommended AR practices. Participants also received informational handouts to be referenced during AR. Attending physicians and trainees randomized to the control arm continued usual rounding practices. Control teams were notified that there would be observers on rounds but were not informed of the study aims.

Randomization and Team Assignments

The medicine service was divided into 2 arms, each comprised of 4 teams. Using a coin flip, Cluster 1 (Teams A, B, C and D) was randomized to the intervention, and Cluster 2 (Teams E, F, G and H) was randomized to the control. This design was pragmatically chosen to ensure that 1 team from each arm would admit patients daily. Allocation concealment of attending physicians and trainees was not possible given the nature of the intervention. Patients were blinded to study arm allocation.

MEASURES AND OUTCOMES

Adherence to Practice Recommendations

Thirty premedical students served as volunteer AR auditors. Each auditor received orientation and training in data collection techniques during a single 2-hour workshop. The auditors, blinded to study arm allocation, independently observed morning AR during weekdays and recorded the completion of the following elements as a dichotomous (yes/no) outcome: pre-rounds huddle, participation of nurse in AR, real-time order entry, and whiteboard use. They recorded the duration of AR per day for each team (minutes) and the rounding model for each patient rounding encounter during AR (bedside, hallway, or card flip).23 Bedside rounds were defined as presentation and discussion of the patient care plan in the presence of the patient. Hallway rounds were defined as presentation and discussion of the patient care plan partially outside the patient’s room and partially in the presence of the patient. Card-flip rounds were defined as presentation and discussion of the patient care plan entirely outside of the patient’s room without the team seeing the patient together. Two auditors simultaneously observed a random subset of patient-rounding encounters to evaluate inter-rater reliability, and the concordance between auditor observations was good (Pearson correlation = 0.66).27
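
The inter-rater reliability check described above pairs two auditors' independent records of the same encounters and summarizes agreement with a Pearson correlation. A minimal sketch of that computation, using hypothetical dichotomous observations (the auditor data and values below are illustrative, not the study's):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired observations of the same rounding encounters
# (1 = element observed, 0 = not observed)
auditor_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
auditor_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
r = pearson_r(auditor_a, auditor_b)  # agreement summary; 1.0 = perfect
```

A correlation near the study's reported 0.66 would similarly be read as good, though not perfect, concordance between observers.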

Patient-Related Outcomes

The primary outcome was patient satisfaction with AR, assessed using a survey adapted from published work.12,14,28,29 Patients were approached to complete the questionnaire after they had experienced at least 1 AR. Patients were excluded if they were non-English-speaking, unavailable (eg, off the unit for testing or treatment), in isolation, or had impaired mental status. For patients admitted multiple times during the study period, only the first questionnaire was used. Survey questions included patient involvement in decision-making, quality of communication between patient and medicine team, and the perception that the medicine team cared about the patient. Patients were asked to state their level of agreement with each item on a 5-point Likert scale. We obtained data on patient demographics from administrative datasets.

Healthcare Provider Outcomes

Attending physicians and trainees on service for at least 7 consecutive days were sent an electronic survey, adapted from published work.25,30 Questions assessed satisfaction with AR, perceived value of bedside rounds, and extent of patient and nursing involvement. Level of agreement with each item was captured on a continuous scale from 0 (strongly disagree) to 100 (strongly agree), or from 0 (far too little) to 100 (far too much), with 50 equating to “about right.” Attending physicians and trainees were also asked to estimate the average duration of AR (in minutes).

Statistical Analyses

Analyses were blinded to study arm allocation and followed intention-to-treat principles. One attending physician crossed over from intervention to control arm; patient surveys associated with this attending (n = 4) were excluded to avoid contamination. No trainees crossed over.

Demographic and clinical characteristics of patients who completed the survey are reported (Appendix). To compare patient satisfaction scores, we used a random-effects regression model to account for correlation among responses within teams within randomized clusters, defining teams by attending physician. As this correlation was negligible and not statistically significant, we did not adjust ordinary linear regression models for clustering. Given observed differences in patient characteristics, we adjusted for a number of covariates (eg, age, gender, insurance payer, race, marital status, trial group arm).
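
The decision to drop the random effect rests on the within-team correlation being negligible. That quantity can be sanity-checked with a one-way intraclass correlation from a balanced one-way ANOVA; a pure-Python sketch on hypothetical Likert scores (not the study's data):

```python
def icc_oneway(groups):
    """ICC(1) from a one-way ANOVA on a balanced design:
    each group is the list of satisfaction scores from one team's patients."""
    k = len(groups[0])                       # responses per team (balanced)
    n = len(groups)                          # number of teams
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    ssb = k * sum((m - grand) ** 2 for m in means)          # between teams
    ssw = sum((x - m) ** 2                                   # within teams
              for g, m in zip(groups, means) for x in g)
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical 1-5 satisfaction scores for four teams
teams = [[4, 5, 3, 4], [3, 5, 4, 4], [5, 3, 4, 5], [4, 3, 5, 4]]
icc = icc_oneway(teams)  # near (or below) zero: clustering is ignorable
```

An ICC near zero, as in this toy example, is the situation in which falling back to ordinary linear regression is reasonable; a large ICC would instead require the random-effects model.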

We conducted simple linear regression for attending and trainee satisfaction comparisons between arms, adjusting only for trainee type (eg, resident, intern, and medical student).

We compared the frequency with which intervention and control teams adhered to the 5 recommended AR practices using chi-square tests. We used independent Student’s t tests to compare total duration of AR by teams within each arm, as well as mean time spent per patient.

This trial had a fixed number of arms (n = 2), each of fixed size (n = 600), based on the average monthly inpatient census on the medicine service. With this fixed sample size, 80% power, and α = 0.05, the trial could detect a 0.16 difference in patient satisfaction scores between groups.
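
The reported 0.16 detectable difference is consistent with the standard normal-approximation formula for a two-sample comparison, assuming a unit standard deviation; a minimal sketch (the z-values are the usual two-sided α = 0.05 and 80% power quantiles):

```python
Z_ALPHA = 1.96    # standard normal quantile for two-sided alpha = 0.05
Z_BETA = 0.8416   # standard normal quantile for power = 0.80

def min_detectable_effect(n_per_arm, z_alpha=Z_ALPHA, z_beta=Z_BETA):
    """Smallest standardized difference detectable with two equal arms,
    via d = (z_alpha + z_beta) * sqrt(2 / n)."""
    return (z_alpha + z_beta) * (2 / n_per_arm) ** 0.5

d = min_detectable_effect(600)  # about 0.16 standard deviations per arm of 600
```

In units of the satisfaction scale, the detectable difference is d multiplied by the outcome's standard deviation, so 0.16 corresponds to roughly 0.16 points if the SD is near 1 on the 5-point Likert scale.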

All analyses were conducted using SAS® v 9.4 (SAS Institute, Inc., Cary, NC).


RESULTS

We observed 241 AR involving 1855 patient rounding encounters in the intervention arm and 264 AR involving 1903 patient rounding encounters in the control arm (response rates shown in Figure 1).

Study flow diagram
Figure 1
Intervention teams adopted each of the recommended AR practices at significantly higher rates than control teams, with the largest difference observed for rounds conducted at the bedside (52.9% vs. 5.4%; Figure 2).
Prevalence of recommended rounding practices
Figure 2
Teams in the intervention arm demonstrated highest adherence to the pre-rounds huddle (78.1%) and lowest adherence to whiteboard use (29.9%).

Patient Satisfaction and Clinical Outcomes

Five hundred ninety-five patients were allocated to the intervention arm and 605 were allocated to the control arm (Figure 1). Mean age, gender, race, marital status, primary language, and insurance provider did not differ between intervention and control arms (Table 1).

Hospitalized Patient Characteristics by Intervention and Control Arms
Table 1
One hundred forty-six (24.5%) and 141 (23.3%) patients completed surveys in the intervention and control arms, respectively. Patients who completed surveys in each arm were younger and more likely to have commercial insurance (Appendix).

Patients in the intervention arm reported significantly higher satisfaction with AR and felt more cared for by their medicine team (Table 2).
Patient, Attending, and Trainee Satisfaction by Randomized Arm
Table 2
Patient-perceived quality of communication and shared decision-making did not differ between arms.

Actual and Perceived Duration of Attending Rounds

The intervention shortened the total duration of AR by 8 minutes on average (143 vs. 151 minutes, P = 0.052) and the time spent per patient by 4 minutes on average (19 vs. 23 minutes, P < 0.001). Despite this, trainees in the intervention arm perceived AR to last longer (mean estimated time: 167 min vs. 152 min, P < 0.001).

Healthcare Provider Outcomes

We observed 79 attending physicians and trainees in the intervention arm and 78 in the control arm, with survey response rates shown in Figure 1. Attending physicians in the intervention and the control arms reported high levels of satisfaction with the quality of AR (Table 2). Attending physicians in the intervention arm were more likely to report an appropriate level of patient involvement and nurse involvement.

Although trainees in the intervention and control arms reported high levels of satisfaction with the quality of AR, trainees in the intervention arm reported lower satisfaction with AR compared with control arm trainees (Table 2). Trainees in the intervention arm reported that AR involved less autonomy, efficiency, and teaching. Trainees in the intervention arm also scored patient involvement more towards the “far too much” end of the scale compared with “about right” in the control arm. However, trainees in the intervention arm perceived nurse involvement closer to “about right,” as opposed to “far too little” in the control arm.

CONCLUSION/DISCUSSION

Training internal medicine teams to adhere to 5 recommended AR practices increased patient satisfaction with AR and strengthened patients’ sense of being cared for by their medicine team. Although the intervention appeared to shorten AR, attending physicians and trainees perceived AR to last longer, and trainee satisfaction with AR decreased.

Teams in the intervention arm adhered to all recommended rounding practices at higher rates than the control teams. Although intervention teams rounded at the bedside 53% of the time, they were encouraged to round at the bedside only on patients who wished to participate in rounds, did not have altered mental status, and for whom the clinical discussion was not too sensitive to occur at the bedside. Of the recommended rounding behaviors, the lowest adherence was seen with whiteboard use.

A major component of the intervention was to move the clinical presentation to the patient’s bedside. Most patients prefer being included in rounds and partaking in trainee education.12-19,28,29,31-33 Patients may also perceive that more time is spent with them during bedside case presentations,14,28 and exposure to providers conferring on their care may enhance patient confidence in the care being delivered.12 Although a recent study of patient-centered bedside rounding on a nonteaching service did not result in increased patient satisfaction,24 teaching services may offer more opportunities for improvement in care coordination and communication.4

Other aspects of the intervention may have contributed to increased patient satisfaction with AR. The pre-rounds huddle may have helped teams prioritize which patients required more time or would benefit most from bedside rounds. The involvement of nurses in AR may have bolstered communication and team dynamics, enhancing the patient’s perception of interprofessional collaboration. Real-time order entry might have led to more efficient implementation of the care plan, and whiteboard use may have helped to keep patients abreast of the care plan.

Patients in the intervention arm felt more cared for by their medicine teams but did not report improvements in communication or in shared decision-making. Prior work highlights that limited patient engagement, activation, and shared decision-making may occur during AR.24,34 Patient-physician communication during AR is challenged by time pressures and competing priorities, including the “need” for trainees to demonstrate their medical knowledge and clinical skills. Efforts that encourage bedside rounding should include communication training with respect to patient engagement and shared decision-making.

Attending physicians reported positive attitudes toward bedside rounding, consistent with prior studies.13,21,31 However, trainees in the intervention arm expressed decreased satisfaction with AR, estimating that AR took longer and reporting too much patient involvement. Prior studies reflect similar bedside-rounding concerns, including perceived workflow inefficiencies, infringement on teaching opportunities, and time constraints.12,20,35 Trainees are under intense time pressures to complete their work, attend educational conferences, and leave the hospital to attend afternoon clinic or to comply with duty-hour restrictions. Trainees value succinctness,12,35,36 so the perception that intervention AR lasted longer likely contributed to trainee dissatisfaction.

Reduced trainee satisfaction with intervention AR may have also stemmed from the perception of decreased autonomy and less teaching, both valued by trainees.20,35,36 The intervention itself reduced trainee autonomy because usual practice at our hospital involves residents deciding where and how to round. Attending physician presence at the bedside during rounds may have further infringed on trainee autonomy if the patient looked to the attending for answers, or if the attending was seen as the AR leader. Attending physicians may mitigate the risk of compromising trainee autonomy by allowing the trainee to speak first, ensuring the trainee is positioned closer to, and at eye level with, the patient, and redirecting patient questions to the trainee as appropriate. Optimizing trainee experience with bedside AR requires preparation and training of attending physicians, who may feel inadequately prepared to lead bedside rounds and conduct bedside teaching.37 Faculty must learn how to preserve team efficiency, create a safe, nonpunitive bedside environment that fosters the trainee-patient relationship, and ensure rounds remain educational.36,38,39

The intervention reduced the average time spent on AR and time spent per patient. Studies examining the relationship between bedside rounding and duration of rounds have yielded mixed results: some have demonstrated no effect of bedside rounds on rounding time,28,40 while others report longer rounding times.37 The pre-rounds huddle and real-time order writing may have enhanced workflow efficiency.

Our study has several limitations. These results reflect the experience of a single large academic medical center and may not be generalizable to other settings. Although overall patient response to the survey was low and may not be representative of the entire patient population, response rates in the intervention and control arms were equivalent. Non-English speaking patients may have preferences that were not reflected in our survey results, and we did not otherwise quantify individual reasons for survey noncompletion. The presence of auditors on AR may have introduced observer bias. There may have been crossover effect; however, observed prevalence of individual practices remained low in the control arm. The 1.5-hour workshop may have inadequately equipped trainees with the complex skills required to lead and participate in bedside rounding, and more training, experience, and feedback may have yielded different results. For instance, residents with more exposure to bedside rounding express greater appreciation of its role in education and patient care.20 While adherence to some of the recommended practices remained low, we did not employ a full range of change-management techniques. Instead, we opted for a “low intensity” intervention (eg, single workshop, handouts) that relied on voluntary adoption by medicine teams and that we hoped other institutions could reproduce. Finally, we did not assess the relative impact of individual rounding behaviors on the measured outcomes.

In conclusion, training medicine teams to adhere to a standardized bedside AR model increased patient satisfaction with rounds. Concomitant trainee dissatisfaction may require further experience and training of attending physicians and trainees to ensure successful adoption.

Acknowledgements


We would like to thank all patients, providers, and trainees who participated in this study. We would also like to acknowledge the following volunteer auditors who observed teams daily: Arianna Abundo, Elahhe Afkhamnejad, Yolanda Banuelos, Laila Fozoun, Soe Yupar Khin, Tam Thien Le, Wing Sum Li, Yaqiao Li, Mengyao Liu, Tzyy-Harn Lo, Shynh-Herng Lo, David Lowe, Danoush Paborji, Sa Nan Park, Urmila Powale, Redha Fouad Qabazard, Monique Quiroz, John-Luke Marcelo Rivera, Manfred Roy Luna Salvador, Tobias Gowen Squier-Roper, Flora Yan Ting, Francesca Natasha T. Tizon, Emily Claire Trautner, Stephen Weiner, Alice Wilson, Kimberly Woo, Bingling J Wu, Johnny Wu, Brenda Yee. Statistical expertise was provided by Joan Hilton from the UCSF Clinical and Translational Science Institute (CTSI), which is supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF-CTSI Grant Number UL1 TR000004. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Thanks also to Oralia Schatzman, Andrea Mazzini, and Erika Huie for their administrative support, and John Hillman for data-related support. Special thanks to Kirsten Kangelaris and Andrew Auerbach for their valuable feedback throughout, and to Maria Novelero and Robert Wachter for their divisional support of this project. 

Disclosure

The authors report no financial conflicts of interest.

Patient experience has recently received heightened attention given evidence supporting an association between patient experience and quality of care,1 and the coupling of patient satisfaction to reimbursement rates for Medicare patients.2 Patient experience is often assessed through surveys of patient satisfaction, which correlates with patient perceptions of nurse and physician communication.3 Teaching hospitals introduce variables that may impact communication, including the involvement of multiple levels of care providers and competing patient care vs. educational priorities. Patients admitted to teaching services express decreased satisfaction with coordination and overall care compared with patients on nonteaching services.4

Clinical supervision of trainees on teaching services is primarily achieved through attending rounds (AR), where patients’ clinical presentations and management are discussed with an attending physician. Poor communication during AR may negatively affect the patient experience through inefficient care coordination among the inter-professional care team or through implementation of interventions without patients’ knowledge or input.5-11 Although patient engagement in rounds has been associated with higher patient satisfaction with rounds,12-19 AR and case presentations often occur at a distance from the patient’s bedside.20,21 Furthermore, AR vary in the time allotted per patient and the extent of participation of nurses and other allied health professionals. Standardized bedside rounding processes have been shown to improve efficiency, decrease daily resident work hours,22 and improve nurse-physician teamwork.23

Despite these benefits, recent prospective studies of bedside AR interventions have not improved patient satisfaction with rounds. One involved the implementation of interprofessional patient-centered bedside rounds on a nonteaching service,24 while the other evaluated the impact of integrating athletic principles into multidisciplinary work rounds.25 Work at our institution had sought to develop AR practice recommendations to foster an optimal patient experience, while maintaining provider workflow efficiency, facilitating interdisciplinary communication, and advancing trainee education.26 Using these AR recommendations, we conducted a prospective randomized controlled trial to evaluate the impact of implementing a standardized bedside AR model on patient satisfaction with rounds. We also assessed attending physician and trainee satisfaction with rounds, and perceived and actual AR duration.

METHODS

Setting and Participants

This trial was conducted on the internal medicine teaching service of the University of California San Francisco Medical Center from September 3, 2013 to November 27, 2013. The service is comprised of 8 teams, with a total average daily census of 80 to 90 patients. Teams are comprised of an attending physician, a senior resident (in the second or third year of residency training), 2 interns, and a third- and/or fourth-year medical student.

 

 

This trial, which was approved by the University of California, San Francisco Committee on Human Research (UCSF CHR) and was registered with ClinicalTrials.gov (NCT01931553), was classified under Quality Improvement and did not require informed consent of patients or providers.

Intervention Description

We conducted a cluster randomized trial to evaluate the impact of a bundled set of 5 AR practice recommendations, adapted from published work,26 on patient experience, as well as on attending and trainee satisfaction: 1) huddling to establish the rounding schedule and priorities; 2) conducting bedside rounds; 3) integrating bedside nurses; 4) completing real-time order entry using bedside computers; 5) updating the whiteboard in each patient’s room with care plan information.

At the beginning of each month, study investigators (Nader Najafi and Bradley Monash) led a 1.5-hour workshop to train attending physicians and trainees allocated to the intervention arm on the recommended AR practices. Participants also received informational handouts to be referenced during AR. Attending physicians and trainees randomized to the control arm continued usual rounding practices. Control teams were notified that there would be observers on rounds but were not informed of the study aims.

Randomization and Team Assignments

The medicine service was divided into 2 arms, each comprised of 4 teams. Using a coin flip, Cluster 1 (Teams A, B, C and D) was randomized to the intervention, and Cluster 2 (Teams E, F, G and H) was randomized to the control. This design was pragmatically chosen to ensure that 1 team from each arm would admit patients daily. Allocation concealment of attending physicians and trainees was not possible given the nature of the intervention. Patients were blinded to study arm allocation.

MEASURES AND OUTCOMES

Adherence to Practice Recommendations

Thirty premedical students served as volunteer AR auditors. Each auditor received orientation and training in data collection techniques during a single 2-hour workshop. The auditors, blinded to study arm allocation, independently observed morning AR during weekdays and recorded the completion of the following elements as a dichotomous (yes/no) outcome: pre-rounds huddle, participation of nurse in AR, real-time order entry, and whiteboard use. They recorded the duration of AR per day for each team (minutes) and the rounding model for each patient rounding encounter during AR (bedside, hallway, or card flip).23 Bedside rounds were defined as presentation and discussion of the patient care plan in the presence of the patient. Hallway rounds were defined as presentation and discussion of the patient care plan partially outside the patient’s room and partially in the presence of the patient. Card-flip rounds were defined as presentation and discussion of the patient care plan entirely outside of the patient’s room without the team seeing the patient together. Two auditors simultaneously observed a random subset of patient-rounding encounters to evaluate inter-rater reliability, and the concordance between auditor observations was good (Pearson correlation = 0.66).27

Patient-Related Outcomes

The primary outcome was patient satisfaction with AR, assessed using a survey adapted from published work.12,14,28,29 Patients were approached to complete the questionnaire after they had experienced at least 1 AR. Patients were excluded if they were non-English-speaking, unavailable (eg, off the unit for testing or treatment), in isolation, or had impaired mental status. For patients admitted multiple times during the study period, only the first questionnaire was used. Survey questions included patient involvement in decision-making, quality of communication between patient and medicine team, and the perception that the medicine team cared about the patient. Patients were asked to state their level of agreement with each item on a 5-point Likert scale. We obtained data on patient demographics from administrative datasets.

Healthcare Provider Outcomes

Attending physicians and trainees on service for at least 7 consecutive days were sent an electronic survey, adapted from published work.25,30 Questions assessed satisfaction with AR, perceived value of bedside rounds, and extent of patient and nursing involvement. Level of agreement with each item was captured on a continuous scale, from 0 (strongly disagree) to 100 (strongly agree) or from 0 (far too little) to 100 (far too much), with 50 equating to “about right.” Attending physicians and trainees were also asked to estimate the average duration of AR (in minutes).

Statistical Analyses

Analyses were blinded to study arm allocation and followed intention-to-treat principles. One attending physician crossed over from intervention to control arm; patient surveys associated with this attending (n = 4) were excluded to avoid contamination. No trainees crossed over.

Demographic and clinical characteristics of patients who completed the survey are reported (Appendix). To compare patient satisfaction scores, we used a random-effects regression model to account for correlation among responses within teams within randomized clusters, defining teams by attending physician. As this correlation was negligible and not statistically significant, we used ordinary linear regression models without adjustment for clustering. Given observed differences in patient characteristics, we adjusted for a number of covariates (eg, age, gender, insurance payer, race, marital status, trial group arm).
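The within-team correlation that motivated this modeling choice can be summarized as a one-way intraclass correlation (ICC): when the estimate is near zero, clustering contributes little and ordinary regression is defensible. A sketch of that check for a balanced design — the team scores below are illustrative, not the study's data:

```python
def icc_oneway(groups):
    """One-way intraclass correlation, ICC(1), for a balanced design.

    groups: list of equal-sized lists of scores, one list per team.
    An ICC near 0 suggests responses within a team are no more alike
    than responses drawn from different teams.
    """
    g = len(groups)               # number of teams
    k = len(groups[0])            # responses per team
    grand = sum(sum(team) for team in groups) / (g * k)
    means = [sum(team) / k for team in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (g - 1)        # between-team mean square
    msw = sum((x - m) ** 2
              for team, m in zip(groups, means)
              for x in team) / (g * (k - 1))                        # within-team mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical extremes: perfectly clustered vs. no between-team differences
icc_high = icc_oneway([[1, 1, 1], [5, 5, 5]])   # every response within a team identical
icc_low = icc_oneway([[1, 5], [1, 5]])          # team means identical
```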

We conducted simple linear regression for attending and trainee satisfaction comparisons between arms, adjusting only for trainee type (ie, resident, intern, or medical student).

We compared the frequency with which intervention and control teams adhered to the 5 recommended AR practices using chi-square tests. We used independent Student’s t tests to compare total duration of AR by teams within each arm, as well as mean time spent per patient.
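For a 2 × 2 comparison of practice adherence by arm, the chi-square statistic has a closed form. A sketch with hypothetical counts (not the study's data); the statistic is compared against 3.84, the 1-degree-of-freedom critical value at α = 0.05:

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g., adherent/non-adherent counts
    in the intervention arm (a, b) and control arm (c, d)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical adherence counts: 50/100 intervention vs. 25/100 control
stat = chi_square_2x2(50, 50, 25, 75)
significant = stat > 3.84  # 1-df critical value at alpha = 0.05
```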

This trial had a fixed number of arms (n = 2), each of fixed size (n = 600), based on the average monthly inpatient census on the medicine service. With this fixed sample size, 80% power, and α = 0.05, we were able to detect a 0.16-point difference in patient satisfaction scores between groups.
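The detectable difference follows from the standard two-sample formula δ = (z₁₋α/₂ + z_power) · σ · √(2/n). With n = 600 per arm and a standardized score (σ = 1), this reproduces the 0.16 figure; note the σ = 1 assumption is ours for illustration and is not stated in the text:

```python
from math import sqrt
from statistics import NormalDist

def detectable_difference(n_per_arm, power=0.80, alpha=0.05, sigma=1.0):
    """Minimal detectable mean difference for a two-sample comparison
    with equal arms, two-sided alpha, and common standard deviation sigma."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value (~1.96)
    z_power = z.inv_cdf(power)           # power quantile (~0.84 for 80%)
    return (z_alpha + z_power) * sigma * sqrt(2 / n_per_arm)

delta = detectable_difference(600)  # approximately 0.16 with sigma = 1
```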

All analyses were conducted using SAS® v 9.4 (SAS Institute, Inc., Cary, NC).

RESULTS

We observed 241 AR involving 1855 patient rounding encounters in the intervention arm and 264 AR involving 1903 patient rounding encounters in the control arm (response rates shown in Figure 1).

Figure 1. Study flow diagram
Intervention teams adopted each of the recommended AR practices at significantly higher rates than control teams, with the largest difference observed for bedside rounding (52.9% vs. 5.4%; Figure 2).
Figure 2. Prevalence of recommended rounding practices
Teams in the intervention arm demonstrated the highest adherence to the pre-rounds huddle (78.1%) and the lowest adherence to whiteboard use (29.9%).

Patient Satisfaction and Clinical Outcomes

Five hundred ninety-five patients were allocated to the intervention arm and 605 were allocated to the control arm (Figure 1). Mean age, gender, race, marital status, primary language, and insurance provider did not differ between intervention and control arms (Table 1).

Table 1. Hospitalized Patient Characteristics by Intervention and Control Arms
One hundred forty-six (24.5%) and 141 (23.3%) patients completed surveys in the intervention and control arms, respectively. Patients who completed surveys in each arm were younger and more likely to have commercial insurance (Appendix).

Patients in the intervention arm reported significantly higher satisfaction with AR and felt more cared for by their medicine team (Table 2).
Table 2. Patient, Attending, and Trainee Satisfaction by Randomized Arm
Patient-perceived quality of communication and shared decision-making did not differ between arms.

Actual and Perceived Duration of Attending Rounds

The intervention shortened the total duration of AR by 8 minutes on average (143 vs. 151 minutes, P = 0.052) and the time spent per patient by 4 minutes on average (19 vs. 23 minutes, P < 0.001). Despite this, trainees in the intervention arm perceived AR to last longer (mean estimated time: 167 min vs. 152 min, P < 0.001).

Healthcare Provider Outcomes

We observed 79 attending physicians and trainees in the intervention arm and 78 in the control arm, with survey response rates shown in Figure 1. Attending physicians in the intervention and the control arms reported high levels of satisfaction with the quality of AR (Table 2). Attending physicians in the intervention arm were more likely to report an appropriate level of patient involvement and nurse involvement.

Although trainees in the intervention and control arms reported high levels of satisfaction with the quality of AR, trainees in the intervention arm reported lower satisfaction with AR compared with control arm trainees (Table 2). Trainees in the intervention arm reported that AR involved less autonomy, efficiency, and teaching. Trainees in the intervention arm also scored patient involvement more towards the “far too much” end of the scale compared with “about right” in the control arm. However, trainees in the intervention arm perceived nurse involvement closer to “about right,” as opposed to “far too little” in the control arm.

CONCLUSION/DISCUSSION

Training internal medicine teams to adhere to 5 recommended AR practices increased patient satisfaction with AR and the perception that patients were more cared for by their medicine team. Although the intervention shortened the measured duration of AR, trainees perceived AR to last longer, and trainee satisfaction with AR decreased.

Teams in the intervention arm adhered to all recommended rounding practices at higher rates than the control teams. Although intervention teams rounded at the bedside 53% of the time, they were encouraged to bedside round only on patients who desired to participate in rounds, who did not have altered mental status, and for whom the clinical discussion was not too sensitive to occur at the bedside. Of the recommended rounding behaviors, the lowest adherence was seen with whiteboard use.

A major component of the intervention was to move the clinical presentation to the patient’s bedside. Most patients prefer being included in rounds and partaking in trainee education.12-19,28,29,31-33 Patients may also perceive that more time is spent with them during bedside case presentations,14,28 and exposure to providers conferring on their care may enhance patient confidence in the care being delivered.12 Although a recent study of patient-centered bedside rounding on a nonteaching service did not result in increased patient satisfaction,24 teaching services may offer more opportunities for improvement in care coordination and communication.4

Other aspects of the intervention may have contributed to increased patient satisfaction with AR. The pre-rounds huddle may have helped teams prioritize which patients required more time or would benefit most from bedside rounds. The involvement of nurses in AR may have bolstered communication and team dynamics, enhancing the patient’s perception of interprofessional collaboration. Real-time order entry might have led to more efficient implementation of the care plan, and whiteboard use may have helped to keep patients abreast of the care plan.

Patients in the intervention arm felt more cared for by their medicine teams but did not report improvements in communication or in shared decision-making. Prior work highlights that limited patient engagement, activation, and shared decision-making may occur during AR.24,34 Patient-physician communication during AR is challenged by time pressures and competing priorities, including the “need” for trainees to demonstrate their medical knowledge and clinical skills. Efforts that encourage bedside rounding should include communication training with respect to patient engagement and shared decision-making.

Attending physicians reported positive attitudes toward bedside rounding, consistent with prior studies.13,21,31 However, trainees in the intervention arm expressed decreased satisfaction with AR, estimating that AR took longer and reporting too much patient involvement. Prior studies reflect similar bedside-rounding concerns, including perceived workflow inefficiencies, infringement on teaching opportunities, and time constraints.12,20,35 Trainees are under intense time pressures to complete their work, attend educational conferences, and leave the hospital to attend afternoon clinic or to comply with duty-hour restrictions. Trainees value succinctness,12,35,36 so the perception that intervention AR lasted longer likely contributed to trainee dissatisfaction.

Reduced trainee satisfaction with intervention AR may have also stemmed from the perception of decreased autonomy and less teaching, both valued by trainees.20,35,36 The intervention itself reduced trainee autonomy because usual practice at our hospital involves residents deciding where and how to round. Attending physician presence at the bedside during rounds may have further infringed on trainee autonomy if the patient looked to the attending for answers, or if the attending was seen as the AR leader. Attending physicians may mitigate the risk of compromising trainee autonomy by allowing the trainee to speak first, ensuring the trainee is positioned closer to, and at eye level with, the patient, and redirecting patient questions to the trainee as appropriate. Optimizing trainee experience with bedside AR requires preparation and training of attending physicians, who may feel inadequately prepared to lead bedside rounds and conduct bedside teaching.37 Faculty must learn how to preserve team efficiency, create a safe, nonpunitive bedside environment that fosters the trainee-patient relationship, and ensure rounds remain educational.36,38,39

The intervention reduced the average time spent on AR and time spent per patient. Studies examining the relationship between bedside rounding and duration of rounds have yielded mixed results: some have demonstrated no effect of bedside rounds on rounding time,28,40 while others report longer rounding times.37 The pre-rounds huddle and real-time order writing may have enhanced workflow efficiency.

Our study has several limitations. These results reflect the experience of a single large academic medical center and may not be generalizable to other settings. Although the overall patient response rate was low and respondents may not be representative of the entire patient population, response rates in the intervention and control arms were equivalent. Non-English speaking patients may have preferences that were not reflected in our survey results, and we did not otherwise quantify individual reasons for survey noncompletion. The presence of auditors on AR may have introduced observer bias. There may have been crossover effect; however, observed prevalence of individual practices remained low in the control arm. The 1.5-hour workshop may have inadequately equipped trainees with the complex skills required to lead and participate in bedside rounding, and more training, experience, and feedback may have yielded different results. For instance, residents with more exposure to bedside rounding express greater appreciation of its role in education and patient care.20 While adherence to some of the recommended practices remained low, we did not employ a full range of change-management techniques. Instead, we opted for a “low intensity” intervention (eg, single workshop, handouts) that relied on voluntary adoption by medicine teams and that we hoped other institutions could reproduce. Finally, we did not assess the relative impact of individual rounding behaviors on the measured outcomes.

In conclusion, training medicine teams to adhere to a standardized bedside AR model increased patient satisfaction with rounds. Concomitant trainee dissatisfaction may require further experience and training of attending physicians and trainees to ensure successful adoption.

Acknowledgements

We would like to thank all patients, providers, and trainees who participated in this study. We would also like to acknowledge the following volunteer auditors who observed teams daily: Arianna Abundo, Elahhe Afkhamnejad, Yolanda Banuelos, Laila Fozoun, Soe Yupar Khin, Tam Thien Le, Wing Sum Li, Yaqiao Li, Mengyao Liu, Tzyy-Harn Lo, Shynh-Herng Lo, David Lowe, Danoush Paborji, Sa Nan Park, Urmila Powale, Redha Fouad Qabazard, Monique Quiroz, John-Luke Marcelo Rivera, Manfred Roy Luna Salvador, Tobias Gowen Squier-Roper, Flora Yan Ting, Francesca Natasha T. Tizon, Emily Claire Trautner, Stephen Weiner, Alice Wilson, Kimberly Woo, Bingling J Wu, Johnny Wu, Brenda Yee. Statistical expertise was provided by Joan Hilton from the UCSF Clinical and Translational Science Institute (CTSI), which is supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF-CTSI Grant Number UL1 TR000004. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Thanks also to Oralia Schatzman, Andrea Mazzini, and Erika Huie for their administrative support, and John Hillman for data-related support. Special thanks to Kirsten Kangelaris and Andrew Auerbach for their valuable feedback throughout, and to Maria Novelero and Robert Wachter for their divisional support of this project. 

Disclosure

The authors report no financial conflicts of interest.

References

1. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):1-18. PubMed
2. Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) Fact Sheet. August 2013. Centers for Medicare and Medicaid Services (CMS). Baltimore, MD. http://www.hcahpsonline.org/files/August_2013_HCAHPS_Fact_Sheet3.pdf. Accessed December 1, 2015.
3. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17:41-48. PubMed

4. Wray CM, Flores A, Padula WV, Prochaska MT, Meltzer DO, Arora VM. Measuring patient experiences on hospitalist and teaching services: Patient responses to a 30-day postdischarge questionnaire. J Hosp Med. 2016;11(2):99-104. PubMed
5. Bharwani AM, Harris GC, Southwick FS. Perspective: A business school view of medical interprofessional rounds: transforming rounding groups into rounding teams. Acad Med. 2012;87(12):1768-1771. PubMed
6. Chand DV. Observational study using the tools of lean six sigma to improve the efficiency of the resident rounding process. J Grad Med Educ. 2011;3(2):144-150. PubMed

7. Stickrath C, Noble M, Prochazka A, et al. Attending rounds in the current era: what is and is not happening. JAMA Intern Med. 2013;173(12):1084-1089. PubMed
8. Weber H, Stöckli M, Nübling M, Langewitz WA. Communication during ward rounds in internal medicine. An analysis of patient-nurse-physician interactions using RIAS. Patient Educ Couns. 2007;67(3):343-348. PubMed
9. McMahon GT, Katz JT, Thorndike ME, Levy BD, Loscalzo J. Evaluation of a redesign initiative in an internal-medicine residency. N Engl J Med. 2010;362(14):1304-1311. PubMed

10. Amoss J. Attending rounds: where do we go from here?: comment on “Attending rounds in the current era”. JAMA Intern Med. 2013;173(12):1089-1090. PubMed
11. Curley C, McEachern JE, Speroff T. A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care. 1998;36(suppl 8):AS4-A12. PubMed
12. Wang-Cheng RM, Barnas GP, Sigmann P, Riendl PA, Young MJ. Bedside case presentations: why patients like them but learners don’t. J Gen Intern Med. 1989;4(4):284-287. PubMed

13. Chauke, HL, Pattinson RC. Ward rounds—bedside or conference room? S Afr Med J. 2006;96(5):398-400. PubMed
14. Lehmann LS, Brancati FL, Chen MC, Roter D, Dobs AS. The effect of bedside case presentations on patients’ perceptions of their medical care. N Engl J Med. 1997;336(16):1150-1155. PubMed
15. Simons RJ, Baily RG, Zelis R, Zwillich CW. The physiologic and psychological effects of the bedside presentation. N Engl J Med. 1989;321(18):1273-1275. PubMed

16. Wise TN, Feldheim D, Mann LS, Boyle E, Rustgi VK. Patients’ reactions to house staff work rounds. Psychosomatics. 1985;26(8):669-672. PubMed
17. Linfors EW, Neelon FA. Sounding Boards. The case of bedside rounds. N Engl J Med. 1980;303(21):1230-1233. PubMed
18. Nair BR, Coughlan JL, Hensley MJ. Student and patient perspectives on bedside teaching. Med Educ. 1997;31(5):341-346. PubMed

19. Romano J. Patients’ attitudes and behavior in ward round teaching. JAMA. 1941;117(9):664-667.
20. Gonzalo JD, Masters PA, Simons RJ, Chuang CH. Attending rounds and bedside case presentations: medical student and medicine resident experiences and attitudes. Teach Learn Med. 2009;21(2):105-110. PubMed
21. Shoeb M, Khanna R, Fang M, et al. Internal medicine rounding practices and the Accreditation Council for Graduate Medical Education core competencies. J Hosp Med. 2014;9(4):239-243. PubMed

22. Calderon AS, Blackmore CC, Williams BL, et al. Transforming ward rounds through rounding-in-flow. J Grad Med Educ. 2014;6(4):750-755. PubMed
23. Henkin S, Chon TY, Christopherson ML, Halvorsen AJ, Worden LM, Ratelle JT. Improving nurse-physician teamwork through interprofessional bedside rounding. J Multidiscip Healthc. 2016;9:201-205. PubMed
24. O’Leary KJ, Killarney A, Hansen LO, et al. Effect of patient-centred bedside rounds on hospitalised patients’ decision control, activation and satisfaction with care. BMJ Qual Saf. 2016;25:921-928. PubMed

25. Southwick F, Lewis M, Treloar D, et al. Applying athletic principles to medical rounds to improve teaching and patient care. Acad Med. 2014;89(7):1018-1023. PubMed
26. Najafi N, Monash B, Mourad M, et al. Improving attending rounds: Qualitative reflections from multidisciplinary providers. Hosp Pract (1995). 2015;43(3):186-190. PubMed
27. Altman DG. Practical Statistics For Medical Research. Boca Raton, FL: Chapman & Hall/CRC; 2006.

28. Gonzalo JD, Chuang CH, Huang G, Smith C. The return of bedside rounds: an educational intervention. J Gen Intern Med. 2010;25(8):792-798. PubMed
29. Fletcher KE, Rankey DS, Stern DT. Bedside interactions from the other side of the bedrail. J Gen Intern Med. 2005;20(1):58-61. PubMed

30. Gatorounds: Applying Championship Athletic Principles to Healthcare. University of Florida Health. http://gatorounds.med.ufl.edu/surveys/. Accessed March 1, 2013.
31. Gonzalo JD, Heist BS, Duffy BL, et al. The value of bedside rounds: a multicenter qualitative study. Teach Learn Med. 2013;25(4):326-333. PubMed
32. Rogers HD, Carline JD, Paauw DS. Examination room presentations in general internal medicine clinic: patients’ and students’ perceptions. Acad Med. 2003;78(9):945-949. PubMed

33. Fletcher KE, Furney SL, Stern DT. Patients speak: what’s really important about bedside interactions with physician teams. Teach Learn Med. 2007;19(2):120-127. PubMed
34. Satterfield JM, Bereknyei S, Hilton JF, et al. The prevalence of social and behavioral topics and related educational opportunities during attending rounds. Acad Med. 2014; 89(11):1548-1557. PubMed
35. Kroenke K, Simmons JO, Copley JB, Smith C. Attending rounds: a survey of physician attitudes. J Gen Intern Med. 1990;5(3):229-233. PubMed

36. Castiglioni A, Shewchuk RM, Willett LL, Heudebert GR, Centor RM. A pilot study using nominal group technique to assess residents’ perceptions of successful attending rounds. J Gen Intern Med. 2008;23(7):1060-1065. PubMed
37. Crumlish CM, Yialamas MA, McMahon GT. Quantification of bedside teaching by an academic hospitalist group. J Hosp Med. 2009;4(5):304-307. PubMed
38. Gonzalo JD, Wolpaw DR, Lehman E, Chuang CH. Patient-centered interprofessional collaborative care: factors associated with bedside interprofessional rounds. J Gen Intern Med. 2014;29(7):1040-1047. PubMed

39. Roy B, Castiglioni A, Kraemer RR, et al. Using cognitive mapping to define key domains for successful attending rounds. J Gen Intern Med. 2012;27(11):1492-1498. PubMed
40. Bhansali P, Birch S, Campbell JK, et al. A time-motion study of inpatient rounds using a family-centered rounds model. Hosp Pediatr. 2013;3(1):31-38. PubMed


Issue
Journal of Hospital Medicine - 12(3)
Page Number
143-149
Display Headline
Standardized attending rounds to improve the patient experience: A pragmatic cluster randomized controlled trial
Article Source

© 2017 Society of Hospital Medicine

Correspondence Location
Address for correspondence: 533 Parnassus Avenue, Box 0131, San Francisco, CA 94143; Telephone: 415-476-5928; Fax, 415-502-1963; E-mail: [email protected]

Effect of an RRT on Resident Perceptions

Article Type
Changed
Sun, 05/21/2017 - 13:32
Display Headline
The effect of a rapid response team on resident perceptions of education and autonomy

Rapid response teams (RRTs) have been promoted by patient safety and quality‐improvement organizations as a strategy to reduce preventable in‐hospital deaths.[1] To date, critical analysis of RRTs has focused primarily on their impact on quality‐of‐care metrics.[2, 3, 4] Comparatively few studies have examined the cultural and educational impact of RRTs, particularly at academic medical centers, and those that do exist have focused almost exclusively on perceptions of nurses rather than resident physicians.[5, 6, 7, 8, 9, 10]

Although a prior study found that internal medicine and general surgery residents believed that RRTs improved patient safety, they were largely ambivalent about the RRT's impact on education and training.[11] However, there has been no focused assessment of resident physician impressions of an RRT across years of training and medical specialty to inform the use of this multidisciplinary team as a component of residency education.

We sought to determine whether resident physicians at a tertiary care academic medical center perceive educational benefit from collaboration with the RRT and whether they feel that the RRT adversely affects clinical autonomy.

METHODS

The Hospital

Moffitt‐Long Hospital, the tertiary academic medical center of the University of California, San Francisco (UCSF), is a 600‐bed acute care hospital that provides comprehensive critical care services and serves as a major referral center in northern California. There are roughly 5000 admissions to the hospital annually. At the time the study was conducted, there were approximately 200 RRT calls per 1000 adult hospital discharges.

The Rapid Response Team

The RRT is called to assess, triage, and treat patients who have experienced a decline in their clinical status short of a cardiopulmonary arrest. The RRT has been operational at UCSF since June 1, 2007, and is composed of a dedicated critical care nurse and respiratory therapist available 24 hours a day, 7 days a week. The RRT can be activated by any concerned staff member based on vital sign abnormalities, decreased urine output, changes in mental status, or any significant concern about the trajectory of the patient's clinical course.

When the RRT is called on a given patient, the patient's primary physician (at our institution, a resident) is also called to the bedside and works alongside the RRT to address the patient's acute clinical needs. The primary physician, bedside nurse, and RRT discuss the plan of care for the patient, including clinical evaluation, management, and the need for additional monitoring or a transition to a higher level of care. Residents at our institution receive no formal instruction regarding the role of the RRT or curriculum on interfacing with the RRT, and they do not serve as members of the RRT as part of a clinical rotation.

The Survey Process

Study subjects were asked via e‐mail to participate in a brief online survey. Subjects were offered the opportunity to win a $100 gift certificate in return for their participation. Weekly e‐mail reminders were sent for a period of 3 months or until a given subject had completed the survey. The survey was administered over a 3‐month period, from March through May, to allow time for residents to work with the RRT during the academic year. The Committee on Human Research at the University of California San Francisco Medical Center approved the study.

Target Population

All residents in specialties that involved direct patient care and the potential to use the adult RRT were included in the study. This included residents in internal medicine, neurology, general surgery, orthopedic surgery, neurosurgery, plastic surgery, urology, and otolaryngology (Table 1). Residents in pediatrics and obstetrics and gynecology were excluded, as emergencies in their patients are addressed by a pediatric RRT and an obstetric anesthesiologist, respectively. Residents in anesthesiology were excluded because they do not care for non-intensive care unit (ICU) patients as part of the primary team and are not involved in RRT encounters.

Demographics of Survey Respondents (N=236)
Demographic: No. (%)
  • NOTE: Abbreviations: RRT, rapid response team; SD, standard deviation.

  • Where data do not equal 100%, this is due to missing data or rounding. Table does not include 10 respondents who had never cared for a patient for whom the RRT was activated.

Medical specialty
  Internal medicine: 145 (61.4)
  Neurology: 18 (7.6)
  General surgery: 31 (13.1)
  Orthopedic surgery: 17 (7.2)
  Neurosurgery: 4 (1.7)
  Plastic surgery: 2 (0.8)
  Urology: 9 (3.8)
  Otolaryngology: 10 (4.2)
Years of postgraduate training: average 2.34 (SD 1.41)
  1 year: 83 (35.2)
  2 years: 60 (25.4)
  3 years: 55 (23.3)
  4 years: 20 (8.5)
  5 years: 8 (3.4)
  6 years: 5 (2.1)
  7 years: 5 (2.1)
Gender
  Male: 133 (56.4)
  Female: 102 (43.2)
Had exposure to RRT during training
  Yes: 106 (44.9)
  No: 127 (53.8)
Had previously initiated a call to the RRT
  Yes: 106 (44.9)
  No: 128 (54.2)

Survey Design

The resident survey contained 20 RRT‐related items and 7 demographic and practice items. Responses for RRT‐related questions utilized a 5‐point Likert scale ranging from strongly disagree to strongly agree. The survey was piloted prior to administration to check comprehension and interpretation by physicians with experience in survey writing (for the full survey, see Supporting Information, Appendix, in the online version of this article).

Survey Objectives

The survey was designed to capture the experiences of residents who had cared for a patient for whom the RRT had been activated. Data collected included residents' perceptions of the impact of the RRT on their residency education and clinical autonomy, the quality of care provided, patient safety, and hospital‐wide culture. Potential barriers to use of the RRT were also examined.

Outcomes

The study's primary outcomes included the perceived educational benefit of the RRT and its perceived impact on clinical autonomy. Secondary outcomes included the effect of years of training and resident specialty on both the perceived educational benefit and impact on clinical autonomy among our study group.

Statistical Analysis

Responses to each survey item were described for each specialty, and subgroup analyses were conducted. Years of training were dichotomized into 1 year (henceforth referred to as interns) or greater than 1 year (henceforth referred to as upper-level residents). Resident specialty was dichotomized into medical fields (internal medicine and neurology) or surgical fields. For statistical analysis, agreement responses were collapsed into disagree (strongly disagree/disagree), neutral, or agree (strongly agree/agree). The influence of years of resident training and resident specialty was assessed for all items in the survey using χ2 or Fisher exact tests, as appropriate, for the 3 agreement categories. Analysis was conducted using SPSS 21.0 (IBM Corp., Armonk, NY).
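As an illustrative sketch (not the authors' SPSS analysis), the collapse of 5-point Likert responses into three agreement categories and the χ2 test on the resulting 2x3 table can be reproduced in Python. The raw response lists below are hypothetical; only their collapsed totals match the interns-versus-upper-level comfort item in Table 3, and the closed-form p-value exp(-x/2) is valid only because a 2x3 table has 2 degrees of freedom.

```python
from collections import Counter
import math

# Hypothetical raw Likert scores (1 = strongly disagree ... 5 = strongly agree);
# the split between "strongly disagree" and "disagree" (etc.) is assumed, but the
# collapsed totals match Table 3: interns 39/29/14, upper-level 65/35/52.
interns = [1] * 20 + [2] * 19 + [3] * 29 + [4] * 10 + [5] * 4
uppers = [1] * 30 + [2] * 35 + [3] * 35 + [4] * 40 + [5] * 12

def collapse(responses):
    """Collapse 5-point Likert scores into [disagree, neutral, agree] counts."""
    c = Counter(responses)
    return [c[1] + c[2], c[3], c[4] + c[5]]

def chi_square_2x3(row_a, row_b):
    """Pearson chi-square test for a 2x3 contingency table (df = 2)."""
    cols = [a + b for a, b in zip(row_a, row_b)]
    total = sum(cols)
    stat = 0.0
    for row in (row_a, row_b):
        rtot = sum(row)
        for obs, ctot in zip(row, cols):
            exp = rtot * ctot / total  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    # For df = 2 the chi-square survival function simplifies to exp(-x/2)
    p = math.exp(-stat / 2)
    return stat, p

stat, p = chi_square_2x3(collapse(interns), collapse(uppers))
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # prints: chi2 = 8.79, p = 0.012
```

The resulting p-value of about 0.012 is consistent with the P=0.01 reported in Table 3 for this item.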

RESULTS

There were 246 responses to the survey of a possible 342, yielding a response rate of 72% (Table 2). Ten respondents stated that they had never cared for a patient for whom the RRT had been activated. Given their lack of exposure to the RRT, these respondents were excluded from the analysis, yielding a final sample size of 236. The demographic and clinical practice characteristics of respondents are shown in Table 1.

Resident Perceptions of the RRT (N=236)
The resident (each row): Strongly Disagree/Disagree, n (%) | Neutral, n (%) | Agree/Strongly Agree, n (%)
  • NOTE: Abbreviations: RRT, rapid response team.

  • Where data do not equal 100%, this is due to missing data or rounding. Includes only data for respondents who had cared for a patient that required RRT activation.

Is comfortable managing the unstable patient without the RRT: 104 (44.1) | 64 (27.1) | 66 (28.0)
And RRT work together to make treatment decisions: 10 (4.2) | 13 (5.5) | 208 (88.1)
Believes there are fewer opportunities to care for unstable floor patients due to the RRT: 188 (79.7) | 26 (11.0) | 17 (7.2)
Feels less prepared to care for unstable patients due to the RRT: 201 (85.2) | 22 (9.3) | 13 (5.5)
Feels that working with the RRT creates a valuable educational experience: 9 (3.8) | 39 (16.5) | 184 (78.0)
Feels that nurses caring for the unstable patient should always contact them prior to contacting the RRT: 123 (52.1) | 33 (14.0) | 76 (32.2)
Would be unhappy with nurses calling RRT prior to contacting them: 141 (59.7) | 44 (18.6) | 51 (21.6)
Perceives that the presence of RRT decreases residents' autonomy: 179 (75.8) | 25 (10.6) | 28 (11.9)

Demographics and Primary Outcomes

Interns comprised 83 (35%) of the respondents; the average time in postgraduate training was 2.34 years (standard deviation=1.41). Of respondents, 163 (69%) were in medical fields, and 73 (31%) were in surgical fields. Overall responses to the survey are shown in Table 2, and subgroup analysis is shown in Table 3.

Perceptions of the RRT Based on Years of Training and Specialty
The resident (each item); columns, n (%): 1 Year (n=83) | >1 Year (n=153) | Medical (n=163) | Surgical (n=73). P values compare by training and by specialty, respectively.
  • NOTE: Abbreviations: RRT, rapid response team.

  • Where data do not equal 100%, this is due to missing data or rounding.

Is comfortable managing the unstable patient without the RRT (P=0.01 by training; P<0.01 by specialty)
  Strongly disagree/disagree: 39 (47.6) | 65 (42.8) | 67 (41.6) | 37 (50.7)
  Neutral: 29 (35.4) | 35 (23.0) | 56 (34.8) | 8 (11.0)
  Agree/strongly agree: 14 (17.1) | 52 (34.2) | 38 (23.6) | 28 (38.4)
And RRT work together to make treatment decisions (P=0.61 by training; P=0.04 by specialty)
  Strongly disagree/disagree: 2 (2.4) | 8 (5.4) | 4 (2.5) | 6 (8.7)
  Neutral: 5 (6.1) | 8 (5.4) | 7 (4.3) | 6 (8.7)
  Agree/strongly agree: 75 (91.5) | 137 (89.3) | 151 (93.2) | 57 (82.6)
Believes there are fewer opportunities to care for unstable floor patients due to the RRT (P=0.05 by training; P=0.04 by specialty)
  Strongly disagree/disagree: 59 (72.8) | 129 (86.0) | 136 (85.5) | 52 (72.2)
  Neutral: 13 (16.0) | 13 (8.7) | 15 (9.4) | 11 (15.3)
  Agree/strongly agree: 9 (11.1) | 8 (5.3) | 8 (5.0) | 9 (12.5)
Feels less prepared to care for unstable patients due to the RRT (P<0.01 by training; P=0.79 by specialty)
  Strongly disagree/disagree: 62 (74.7) | 139 (90.8) | 140 (85.9) | 61 (83.6)
  Neutral: 14 (16.9) | 8 (5.2) | 15 (9.2) | 7 (9.6)
  Agree/strongly agree: 7 (8.4) | 6 (3.9) | 8 (4.9) | 5 (6.8)
Feels working with the RRT is a valuable educational experience (P=0.61 by training; P=0.01 by specialty)
  Strongly disagree/disagree: 2 (2.4) | 7 (4.7) | 2 (1.2) | 7 (9.9)
  Neutral: 12 (14.6) | 27 (18.0) | 25 (15.5) | 14 (19.7)
  Agree/strongly agree: 68 (82.9) | 116 (77.3) | 134 (83.2) | 50 (70.4)
Feels nurses caring for unstable patients should always contact the resident prior to contacting the RRT (P=0.49 by training; P<0.01 by specialty)
  Strongly disagree/disagree: 47 (57.3) | 76 (50.7) | 97 (60.2) | 26 (36.6)
  Neutral: 9 (11.0) | 24 (16.0) | 26 (16.1) | 7 (9.9)
  Agree/strongly agree: 26 (31.7) | 50 (33.3) | 38 (23.6) | 38 (53.5)
Would be unhappy with nurses calling RRT prior to contacting them (P=0.81 by training; P<0.01 by specialty)
  Strongly disagree/disagree: 51 (61.4) | 90 (58.8) | 109 (66.9) | 32 (43.8)
  Neutral: 16 (19.3) | 28 (18.3) | 30 (18.4) | 14 (19.2)
  Agree/strongly agree: 16 (19.3) | 35 (22.9) | 24 (14.7) | 27 (37.0)
Perceives that the presence of the RRT decreases autonomy as a physician (P=0.95 by training; P=0.18 by specialty)
  Strongly disagree/disagree: 63 (77.8) | 116 (76.8) | 127 (79.9) | 52 (71.2)
  Neutral: 9 (11.1) | 16 (10.6) | 17 (10.7) | 8 (11.0)
  Agree/strongly agree: 9 (11.1) | 19 (12.6) | 15 (9.4) | 13 (17.8)

Effect of the RRT on Resident Education

Of all residents, 66 (28%) agreed that they felt comfortable managing an unstable patient without the assistance of the RRT. Surgical residents felt more comfortable managing an unstable patient alone (38%) compared with medical residents (24%) (P<0.01). Interns felt less comfortable caring for unstable patients without the RRT's assistance (17%) compared with upper-level residents (34%) (P=0.01).

Residents overall disagreed with the statement that the RRT left them feeling less prepared to care for unstable patients (n=201; 85%). More upper‐level residents disagreed with this assertion (91%) compared with interns (75%) (P<0.01). Responses to this question did not differ significantly between medical and surgical residents.

Upper‐level residents were more likely to disagree with the statement that the RRT resulted in fewer opportunities to care for unstable patients (n=129; 86%) compared with interns (n=59; 73%) (P=0.05). Medical residents were also more likely to disagree with this statement (n=136; 86%) compared with surgical residents (n=52; 72%) (P=0.04).

With respect to residents' overall impressions of the educational value of the RRT, 68 (83%) interns and 116 (77%) upper‐level residents agreed that it provided a valuable educational experience (P=0.61). Medical and surgical residents differed in this regard, with 134 (83%) medical residents and 50 (70%) surgical residents agreeing that the RRT provided a valuable educational experience (P=0.01).

Effect of the RRT on Clinical Autonomy

Of all residents, 123 (52%) disagreed that the bedside nurse should always contact the primary resident prior to calling the RRT, whereas 76 (32%) agreed. Medical residents were more likely to disagree with this approach (n=97; 60%) than were surgical residents (n=26; 36%); indeed, most surgical residents (n=38; 54%) agreed that they should be contacted first (P<0.01). There was no difference between interns and upper-level residents in response to this question.

There were no differences between interns and upper-level residents with respect to perceptions of the RRT's impact on clinical autonomy: 11% of interns and 13% of upper-level residents agreed that the RRT decreased their clinical autonomy as physicians. There was no significant difference between medical and surgical residents' responses to this question.

The majority of residents (n=208; 88%) agreed that they and the RRT work together to make treatment decisions for patients. This was true regardless of year of training (P=0.61), but it was expressed more often among medical residents than surgical residents (n=151, 93% vs n=57, 83%; P=0.04).

DISCUSSION

Most studies examining the educational and cultural impact of RRTs exist in the nursing literature. These studies demonstrate that medical and surgical nurses are often reluctant to call the RRT for fear of criticism by the patient's physician.[5, 8, 9, 10, 11, 12, 13] In contrast, our data demonstrate that resident physicians across all levels of training and specialties have a positive view of the RRT and its role in patient care. The data support our hypothesis that although most residents perceive educational benefit from their interactions with the RRT, this perception is greater for less‐experienced residents and for those residents who routinely provide care for critically ill patients and serve as code team leaders. In addition, a minority of residents, irrespective of years of training or medical specialty, felt that the RRT negatively impacted their clinical autonomy.

Our data have several important implications. First, although over half of the residents surveyed had not been exposed to RRTs during medical school, and despite having no formal training on the role of the RRT during residency, most residents identified their interactions with the RRT as potential learning opportunities. This finding differs from that of Benin and colleagues, who suggested that RRTs might negatively impact residents' educational development and decrease opportunities for high-stakes clinical reasoning by allowing the clinical decision-making process to be driven by the RRT staff rather than the resident.[5] One possible explanation for this discrepancy is the variable makeup of the RRT at different institutions. At our medical center, the RRT comprises a critical care nurse and respiratory therapist, whereas at other institutions, the RRT may be led by a resident, fellow, attending hospitalist, or intensivist, any of whom might supersede the primary resident once the RRT is engaged.

In our study, the perceived educational benefit of the RRT was most pronounced among interns. Interns likely derive incrementally greater benefit from each encounter with an acutely decompensating patient than do senior residents, whether the RRT is present or not. Observing the actions of seasoned nurses and respiratory therapists may demonstrate new tools for interns to use in their management of such situations; for example, the RRT may suggest different modes of oxygen delivery or new diagnostic tests. The RRT also likely helps interns navigate the hospital system by assisting with decisions around escalation of care and serving as a liaison to ICU staff.

Our data also have implications for resident perceptions of clinical autonomy. Interns, who have far less experience caring for unstable patients than upper-level residents, expressed more concern about the RRT stripping them of opportunities to do so and about feeling less prepared to handle clinically deteriorating patients. Part of this perception may be due to interns feeling less comfortable taking charge of a patient's care in the presence of an experienced critical care nurse and respiratory therapist, both for reasons related to clinical experience and to a cultural hierarchy that often places the intern at the bottom of the authority spectrum. In addition, when the RRT is called on an intern's patient, the senior resident may accompany the intern to the bedside and guide the intern on his or her approach to the situation; in some cases, the senior resident may take charge, leaving the intern feeling less autonomous.

If training sessions could be developed to address not only clinical decision making but also multidisciplinary team interactions and roles in the acute care setting, interns' concerns might be mitigated. Such curricula could also enhance residents' experience in interprofessional care, an aspect of clinical training that has become increasingly important in the age of limited duty hours and higher-volume, higher-acuity inpatient censuses. An RRT model, like a code blue model, could be used in simulation-based training to increase both comfort with use of the RRT and the efficiency of the RRT-resident-nurse team. Although our study did not specifically address residents' perceptions of multidisciplinary teams, this could be a promising area for further study.

For surgical residents, additional factors are likely at play. Surgical residents spend significant time in the operating room, which reduces time at the bedside and hinders their ability to respond swiftly when an RRT is called on their patient. This could cause surgical residents to feel less involved in the care of that patient (supported by our finding that fewer surgical residents felt able to collaborate with the RRT) and to derive less educational benefit and clinical satisfaction from the experience. Differences between medical and surgical postgraduate training also likely play a role, manifested in varying clinical roles and duration of training; as such, it may not be appropriate to draw direct comparisons between respective postgraduate year levels. In addition, differences in patients' medical complexity, varying allegiance to the traditional hierarchy of medical providers, and degree of familiarity with the RRT itself may affect surgical residents' comfort with the RRT.

Limitations of our study include that it was conducted at a single site and addressed a specific population of residents at our tertiary academic center. Though we achieved an excellent response rate, our subspecialty sample sizes were too small to allow for individual comparisons among those groups. Conducting a larger study at multiple institutions where the makeup of the RRT differs could provide further insight into how different clinical environments and RRT models impact resident perceptions. Finally, we allowed each respondent to interpret both educational benefit and clinical autonomy in the context of their own level of training and clinical practice rather than providing strict definitions of these terms. There is no standardized definition of autonomy in the context of resident clinical practice, and we did not measure direct educational outcomes. Our study design therefore allowed only for measurement of perceptions of these concepts. Measurement of the actual educational value of the RRT (for example, through direct clinical observation or by incorporating the RRT experience into an entrustable professional activity) would provide more quantitative evidence of the RRT's utility for our resident population. Future study in this area would help to support the development and ongoing assessment of RRT-based curricula.

CONCLUSION

Our data show that resident physicians have a strongly favorable opinion of the RRT at our institution. Future studies should aim to quantify the educational benefit of RRTs for residents and identify areas for curricular development to enhance resident education as RRTs become more pervasive.

References
  1. Institute for Healthcare Improvement. Rapid response teams. Available at: http://www.ihi.org/topics/rapidresponseteams. Accessed May 5, 2014.
  2. Chan PS, Jain R, Nallamothu BK, Berg RA, Sasson C. Rapid response teams: a systematic review and meta-analysis. Arch Intern Med. 2010;170(1):18-26.
  3. Devita MA, Bellomo R, Hillman K, et al. Findings of the first consensus conference on medical emergency teams. Crit Care Med. 2006;34(9):2463-2478.
  4. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid response systems: a systematic review. Crit Care Med. 2007;35(5):1238-1243.
  5. Benin AL, Borgstrom CP, Jenq GY, Roumanis SA, Horwitz LI. Defining impact of a rapid response team: qualitative study with nurses, physicians and hospital administrators. BMJ Qual Saf. 2012;21(5):391-398.
  6. Leach LS, Mayo A, O'Rourke M. How RNs rescue patients: a qualitative study of RNs' perceived involvement in rapid response teams. Qual Saf Health Care. 2010;19(5):e13.
  7. Metcalf R, Scott S, Ridgway M, Gibson D. Rapid response team approach to staff satisfaction. Orthop Nurs. 2008;27(5):266-271; quiz 272-273.
  8. Salamonson Y, Heere B, Everett B, Davidson P. Voices from the floor: nurses' perceptions of the medical emergency team. Intensive Crit Care Nurs. 2006;22(3):138-143.
  9. Shapiro SE, Donaldson NE, Scott MB. Rapid response teams seen through the eyes of the nurse. Am J Nurs. 2010;110(6):28-34; quiz 35-36.
  10. Shearer B, Marshall S, Buist MD, et al. What stops hospital clinical staff from following protocols? An analysis of the incidence and factors behind the failure of bedside clinical staff to activate the rapid response system in a multi-campus Australian metropolitan healthcare service. BMJ Qual Saf. 2012;21(7):569-575.
  11. Sarani B, Sonnad S, Bergey MR, et al. Resident and RN perceptions of the impact of a medical emergency team on education and patient safety in an academic medical center. Crit Care Med. 2009;37(12):3091-3096.
  12. Marshall SD, Kitto S, Shearer W, et al. Why don't hospital staff activate the rapid response system (RRS)? How frequently is it needed and can the process be improved? Implement Sci. 2011;6:39.
  13. Peebles E, Subbe CP, Hughes P, Gemmell L. Timing and teamwork: an observational pilot study of patients referred to a rapid response team with the aim of identifying factors amenable to re-design of a rapid response system. Resuscitation. 2012;83(6):782-787.
Journal of Hospital Medicine. 10(1):8-12.

Rapid response teams (RRTs) have been promoted by patient safety and quality‐improvement organizations as a strategy to reduce preventable in‐hospital deaths.[1] To date, critical analysis of RRTs has focused primarily on their impact on quality‐of‐care metrics.[2, 3, 4] Comparatively few studies have examined the cultural and educational impact of RRTs, particularly at academic medical centers, and those that do exist have focused almost exclusively on perceptions of nurses rather than resident physicians.[5, 6, 7, 8, 9, 10]

Although a prior study found that internal medicine and general surgery residents believed that RRTs improved patient safety, they were largely ambivalent about the RRT's impact on education and training.[11] To date, there has been no focused assessment of resident physician impressions of an RRT across years of training and medical specialty to inform the use of this multidisciplinary team as a component of their residency education.

We sought to determine whether resident physicians at a tertiary care academic medical center perceive educational benefit from collaboration with the RRT and whether they feel that the RRT adversely affects clinical autonomy.


Our data have several important implications. First, although over half of the residents surveyed had not been exposed to RRTs during medical school, and despite having no formal training on the role of the RRT during residency, most residents identified their interactions with the RRT as potential learning opportunities. This finding differs from that of Benin and colleagues, who suggested that RRTs might negatively impact residents' educational development and decrease opportunities for high‐stakes clinical reasoning by allowing the clinical decision‐making process to be driven by the RRT staff rather than the resident.[5] One possible explanation for this discrepancy is the variable makeup of the RRT at different institutions. At our medical center, the RRT is comprised of a critical care nurse and respiratory therapist, whereas at other institutions, the RRT may be led by a resident, fellow, attending hospitalist, or intensivist, any of whom might supersede the primary resident once the RRT is engaged.

In our study, the perceived educational benefit of the RRT was most pronounced with interns. Interns likely derive incrementally greater benefit from each encounter with an acutely decompensating patient than do senior residents, whether the RRT is present or not. Observing the actions of seasoned nurses and respiratory therapists may demonstrate new tools for interns to use in their management of such situations; for example, the RRT may suggest different modes of oxygen delivery or new diagnostic tests. The RRT also likely helps interns navigate the hospital system by assisting with decisions around escalation of care and serving as a liaison to ICU staff.

Our data also have implications for resident perceptions of clinical autonomy. Interns, far less experienced caring for unstable patients than upper‐level residents, expressed more concern about the RRT stripping them of opportunities to do so and about feeling less prepared to handle clinically deteriorating patients. Part of this perception may be due to interns feeling less comfortable taking charge of a patient's care in the presence of an experienced critical care nurse and respiratory therapist, both for reasons related to clinical experience and to a cultural hierarchy that often places the intern at the bottom of the authority spectrum. In addition, when the RRT is called on an intern's patient, the senior resident may accompany the intern to the bedside and guide the intern on his or her approach to the situation; in some cases, the senior resident may take charge, leaving the intern feeling less autonomous.

If training sessions could be developed to address not only clinical decision making, but also multidisciplinary team interactions and roles in the acute care setting, this may mitigate interns' concerns. Such curricula could also enhance residents' experience in interprofessional care, an aspect of clinical training that has become increasingly important in the age of limited duty hours and higher volume, and higher acuity inpatient censuses. An RRT model, like a code blue model, could be used in simulation‐based training to increase both comfort with use of the RRT and efficiency of the RRTresidentnurse team. Although our study did not address specifically residents' perceptions of multidisciplinary teams, this could be a promising area for further study.

For surgical residents, additional factors are likely at play. Surgical residents spend significant time in the operating room, reducing time present at the bedside and hindering the ability to respond swiftly when an RRT is called on their patient. This could cause surgical residents to feel less involved in the care of that patientsupported by our finding that fewer surgical residents felt able to collaborate with the RRTand also to derive less educational benefit and clinical satisfaction from the experience. Differences between medical and surgical postgraduate training also likely play a role, manifest by varying clinical roles and duration of training, and as such it may not be appropriate to draw direct comparisons between respective postgraduate year levels. In addition, differences in patients' medical complexity, varying allegiance to the traditional hierarchy of medical providers, and degree of familiarity with the RRT itself may impact surgical residents' comfort with the RRT.

Limitations of our study include that it was conducted at a single site and addressed a specific population of residents at our tertiary academic center. Though we achieved an excellent response rate, our subspecialty sample sizes were too small to allow for individual comparisons among those groups. Conducting a larger study at multiple institutions where the makeup of the RRT differs could provide further insight into how different clinical environments and different RRT models impact resident perceptions. Finally, we allowed each respondent to interpret both educational benefit and clinical autonomy in the context of their own level of training and clinical practice rather than providing strict definitions of these terms. There is no standardized definition of autonomy in the context of resident clinical practice, and we did not measure direct educational outcomes. Our study design therefore allowed only for measurement of perceptions of these concepts. Measurement of actual educational value of the RRTfor example, through direct clinical observation or by incorporating the RRT experience into an entrustable professional activitywould provide more quantitative evidence of the RRT's utility for our resident population. Future study in this area would help to support the development and ongoing assessment of RRT‐based curricula moving forward.

CONCLUSION

Our data show that resident physicians have a strongly favorable opinion of the RRT at our institution. Future studies should aim to quantify the educational benefit of RRTs for residents and identify areas for curricular development to enhance resident education as RRTs become more pervasive.

Rapid response teams (RRTs) have been promoted by patient safety and quality‐improvement organizations as a strategy to reduce preventable in‐hospital deaths.[1] To date, critical analysis of RRTs has focused primarily on their impact on quality‐of‐care metrics.[2, 3, 4] Comparatively few studies have examined the cultural and educational impact of RRTs, particularly at academic medical centers, and those that do exist have focused almost exclusively on perceptions of nurses rather than resident physicians.[5, 6, 7, 8, 9, 10]

Although a prior study found that internal medicine and general surgery residents believed that RRTs improved patient safety, those residents were largely ambivalent about the RRT's impact on education and training.[11] To date, there has been no focused assessment of resident physician impressions of an RRT across years of training and medical specialty to inform the use of this multidisciplinary team as a component of their residency education.

We sought to determine whether resident physicians at a tertiary care academic medical center perceive educational benefit from collaboration with the RRT and whether they feel that the RRT adversely affects clinical autonomy.

METHODS

The Hospital

Moffitt‐Long Hospital, the tertiary academic medical center of the University of California, San Francisco (UCSF), is a 600‐bed acute care hospital that provides comprehensive critical care services and serves as a major referral center in northern California. There are roughly 5000 admissions to the hospital annually. At the time the study was conducted, there were approximately 200 RRT calls per 1000 adult hospital discharges.

The Rapid Response Team

The RRT is called to assess, triage, and treat patients who have experienced a decline in their clinical status short of a cardiopulmonary arrest. The RRT has been operational at UCSF since June 1, 2007, and is composed of a dedicated critical care nurse and respiratory therapist available 24 hours a day, 7 days a week. The RRT can be activated by any concerned staff member based on vital sign abnormalities, decreased urine output, changes in mental status, or any significant concern about the trajectory of the patient's clinical course.

When the RRT is called on a given patient, the patient's primary physician (at our institution, a resident) is also called to the bedside and works alongside the RRT to address the patient's acute clinical needs. The primary physician, bedside nurse, and RRT discuss the plan of care for the patient, including clinical evaluation, management, and the need for additional monitoring or a transition to a higher level of care. Residents at our institution receive no formal instruction regarding the role of the RRT or curriculum on interfacing with the RRT, and they do not serve as members of the RRT as part of a clinical rotation.

The Survey Process

Study subjects were asked via e‐mail to participate in a brief online survey. Subjects were offered the opportunity to win a $100 gift certificate in return for their participation. Weekly e‐mail reminders were sent for a period of 3 months or until a given subject had completed the survey. The survey was administered over a 3‐month period, from March through May, to allow time for residents to work with the RRT during the academic year. The Committee on Human Research at the University of California San Francisco Medical Center approved the study.

Target Population

All residents in specialties that involved direct patient care and the potential to use the adult RRT were included in the study. This included residents in the fields of internal medicine, neurology, general surgery, orthopedic surgery, neurosurgery, plastic surgery, urology, and otolaryngology (Table 1). Residents in pediatrics and obstetrics and gynecology were excluded, as emergencies in their patients are addressed by a pediatric RRT and an obstetric anesthesiologist, respectively. Residents in anesthesiology were excluded as they do not care for nonintensive care unit (ICU) patients as part of the primary team and are not involved in RRT encounters.

Demographics of Survey Respondents (N=236)
  • NOTE: Abbreviations: RRT, rapid response team; SD, standard deviation.

  • Where data do not equal 100%, this is due to missing data or rounding. Table does not include 10 respondents who had never cared for a patient for whom the RRT was activated.

Demographic | No. (%)
Medical specialty
  Internal medicine | 145 (61.4)
  Neurology | 18 (7.6)
  General surgery | 31 (13.1)
  Orthopedic surgery | 17 (7.2)
  Neurosurgery | 4 (1.7)
  Plastic surgery | 2 (0.8)
  Urology | 9 (3.8)
  Otolaryngology | 10 (4.2)
Years of postgraduate training (average 2.34, SD 1.41)
  1 | 83 (35.2)
  2 | 60 (25.4)
  3 | 55 (23.3)
  4 | 20 (8.5)
  5 | 8 (3.4)
  6 | 5 (2.1)
  7 | 5 (2.1)
Gender
  Male | 133 (56.4)
  Female | 102 (43.2)
Had exposure to RRT during training
  Yes | 106 (44.9)
  No | 127 (53.8)
Had previously initiated a call to the RRT
  Yes | 106 (44.9)
  No | 128 (54.2)

Survey Design

The resident survey contained 20 RRT‐related items and 7 demographic and practice items. Responses for RRT‐related questions utilized a 5‐point Likert scale ranging from strongly disagree to strongly agree. Prior to administration, the survey was piloted with physicians experienced in survey writing to check comprehension and interpretation (for the full survey, see Supporting Information, Appendix, in the online version of this article).

Survey Objectives

The survey was designed to capture the experiences of residents who had cared for a patient for whom the RRT had been activated. Data collected included residents' perceptions of the impact of the RRT on their residency education and clinical autonomy, the quality of care provided, patient safety, and hospital‐wide culture. Potential barriers to use of the RRT were also examined.

Outcomes

The study's primary outcomes included the perceived educational benefit of the RRT and its perceived impact on clinical autonomy. Secondary outcomes included the effect of years of training and resident specialty on both the perceived educational benefit and impact on clinical autonomy among our study group.

Statistical Analysis

Responses to each survey item were described for each specialty, and subgroup analysis was conducted. For years of training, that item was dichotomized into either 1 year (henceforth referred to as interns) or greater than 1 year (henceforth referred to as upper‐level residents). Resident specialty was dichotomized into medical fields (internal medicine and neurology) or surgical fields. For statistical analysis, agreement statements were collapsed to either disagree (strongly disagree/disagree), neutral, or agree (strongly agree/agree). The influence of years of resident training and resident specialty was assessed for all items in the survey using χ2 or Fisher exact tests as appropriate for the 3 agreement categories. Analysis was conducted using SPSS 21.0 (IBM Corp., Armonk, NY).
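As a concrete illustration of the analysis described above, the following is a minimal sketch (not the authors' SPSS procedure) in Python: it collapses 5-point Likert labels into the three agreement categories and computes a Pearson chi-square test of independence on the resulting 2x3 table. The counts are the interns vs. upper-level residents responses to "comfortable managing the unstable patient without the RRT" from Table 3; because a 2x3 table has 2 degrees of freedom, the chi-square survival function has the closed form exp(-x/2), so no statistics library is needed.

```python
import math

# Map 5-point Likert labels onto the three collapsed agreement categories.
COLLAPSE = {
    "strongly disagree": "disagree", "disagree": "disagree",
    "neutral": "neutral",
    "agree": "agree", "strongly agree": "agree",
}

def collapse_counts(responses):
    """Return [disagree, neutral, agree] counts for a list of Likert labels."""
    counts = {"disagree": 0, "neutral": 0, "agree": 0}
    for r in responses:
        counts[COLLAPSE[r.lower()]] += 1
    return [counts[k] for k in ("disagree", "neutral", "agree")]

def chi2_independence(table):
    """Pearson chi-square statistic and p-value for a 2x3 contingency table.

    With (2-1)*(3-1) = 2 degrees of freedom, the chi-square survival
    function is exp(-x/2), giving the p-value in closed form.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = sum(
        (obs - r * c / n) ** 2 / (r * c / n)
        for row, r in zip(table, row_totals)
        for obs, c in zip(row, col_totals)
    )
    return chi2, math.exp(-chi2 / 2)

# [disagree, neutral, agree] counts from Table 3: interns vs. upper-level
# residents, "comfortable managing the unstable patient without the RRT".
table = [[39, 29, 14], [65, 35, 52]]
chi2, p = chi2_independence(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # prints chi2=8.79, p=0.012
```

The p-value rounds to the P=0.01 reported for this comparison; in practice a cell with a small expected count would instead call for the Fisher exact test, as the authors note.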

RESULTS

There were 246 responses to the survey of a possible 342, yielding a response rate of 72% (Table 2). Ten respondents stated that they had never cared for a patient where the RRT had been activated. Given their lack of exposure to the RRT, these respondents were excluded from the analysis, yielding a final sample size of 236. The demographic and clinical practice characteristics of respondents are shown in Table 1.

Resident Perceptions of the RRT (N=236)
  • NOTE: Abbreviations: RRT, rapid response team.

  • Where data do not equal 100%, this is due to missing data or rounding. Includes only data for respondents who had cared for a patient that required RRT activation.

The resident… | Strongly Disagree/Disagree, n (%) | Neutral, n (%) | Agree/Strongly Agree, n (%)
Is comfortable managing the unstable patient without the RRT | 104 (44.1) | 64 (27.1) | 66 (28.0)
And RRT work together to make treatment decisions | 10 (4.2) | 13 (5.5) | 208 (88.1)
Believes there are fewer opportunities to care for unstable floor patients due to the RRT | 188 (79.7) | 26 (11.0) | 17 (7.2)
Feels less prepared to care for unstable patients due to the RRT | 201 (85.2) | 22 (9.3) | 13 (5.5)
Feels that working with the RRT creates a valuable educational experience | 9 (3.8) | 39 (16.5) | 184 (78.0)
Feels that nurses caring for the unstable patient should always contact them prior to contacting the RRT | 123 (52.1) | 33 (14.0) | 76 (32.2)
Would be unhappy with nurses calling RRT prior to contacting them | 141 (59.7) | 44 (18.6) | 51 (21.6)
Perceives that the presence of RRT decreases residents' autonomy | 179 (75.8) | 25 (10.6) | 28 (11.9)

Demographics and Primary Outcomes

Interns comprised 83 (35%) of the respondents; the average time in postgraduate training was 2.34 years (standard deviation=1.41). Of respondents, 163 (69%) were in medical fields, and 73 (31%) were in surgical fields. Overall responses to the survey are shown in Table 2, and subgroup analysis is shown in Table 3.

Perceptions of the RRT Based on Years of Training and Specialty
  • NOTE: Abbreviations: RRT, rapid response team.

  • Where data do not equal 100%, this is due to missing data or rounding.

The resident… | 1 Year, n=83, n (%) | >1 Year, n=153, n (%) | Medical, n=163, n (%) | Surgical, n=73, n (%)

Is comfortable managing the unstable patient without the RRT (P=0.01 by training; P<0.01 by specialty)
  Strongly disagree/disagree | 39 (47.6) | 65 (42.8) | 67 (41.6) | 37 (50.7)
  Neutral | 29 (35.4) | 35 (23.0) | 56 (34.8) | 8 (11.0)
  Agree/strongly agree | 14 (17.1) | 52 (34.2) | 38 (23.6) | 28 (38.4)
And RRT work together to make treatment decisions (P=0.61 by training; P=0.04 by specialty)
  Strongly disagree/disagree | 2 (2.4) | 8 (5.4) | 4 (2.5) | 6 (8.7)
  Neutral | 5 (6.1) | 8 (5.4) | 7 (4.3) | 6 (8.7)
  Agree/strongly agree | 75 (91.5) | 137 (89.3) | 151 (93.2) | 57 (82.6)
Believes there are fewer opportunities to care for unstable floor patients due to the RRT (P=0.05 by training; P=0.04 by specialty)
  Strongly disagree/disagree | 59 (72.8) | 129 (86.0) | 136 (85.5) | 52 (72.2)
  Neutral | 13 (16.0) | 13 (8.7) | 15 (9.4) | 11 (15.3)
  Agree/strongly agree | 9 (11.1) | 8 (5.3) | 8 (5.0) | 9 (12.5)
Feels less prepared to care for unstable patients due to the RRT (P<0.01 by training; P=0.79 by specialty)
  Strongly disagree/disagree | 62 (74.7) | 139 (90.8) | 140 (85.9) | 61 (83.6)
  Neutral | 14 (16.9) | 8 (5.2) | 15 (9.2) | 7 (9.6)
  Agree/strongly agree | 7 (8.4) | 6 (3.9) | 8 (4.9) | 5 (6.8)
Feels working with the RRT is a valuable educational experience (P=0.61 by training; P=0.01 by specialty)
  Strongly disagree/disagree | 2 (2.4) | 7 (4.7) | 2 (1.2) | 7 (9.9)
  Neutral | 12 (14.6) | 27 (18.0) | 25 (15.5) | 14 (19.7)
  Agree/strongly agree | 68 (82.9) | 116 (77.3) | 134 (83.2) | 50 (70.4)
Feels nurses caring for unstable patients should always contact the resident prior to contacting the RRT (P=0.49 by training; P<0.01 by specialty)
  Strongly disagree/disagree | 47 (57.3) | 76 (50.7) | 97 (60.2) | 26 (36.6)
  Neutral | 9 (11.0) | 24 (16.0) | 26 (16.1) | 7 (9.9)
  Agree/strongly agree | 26 (31.7) | 50 (33.3) | 38 (23.6) | 38 (53.5)
Would be unhappy with nurses calling RRT prior to contacting them (P=0.81 by training; P<0.01 by specialty)
  Strongly disagree/disagree | 51 (61.4) | 90 (58.8) | 109 (66.9) | 32 (43.8)
  Neutral | 16 (19.3) | 28 (18.3) | 30 (18.4) | 14 (19.2)
  Agree/strongly agree | 16 (19.3) | 35 (22.9) | 24 (14.7) | 27 (37.0)
Perceives that the presence of the RRT decreases autonomy as a physician (P=0.95 by training; P=0.18 by specialty)
  Strongly disagree/disagree | 63 (77.8) | 116 (76.8) | 127 (79.9) | 52 (71.2)
  Neutral | 9 (11.1) | 16 (10.6) | 17 (10.7) | 8 (11.0)
  Agree/strongly agree | 9 (11.1) | 19 (12.6) | 15 (9.4) | 13 (17.8)

Effect of the RRT on Resident Education

Of all residents, 66 (28%) agreed that they felt comfortable managing an unstable patient without the assistance of the RRT. Surgical residents felt more comfortable managing an unstable patient alone (38%) compared with medical residents (24%) (P<0.01). Interns felt less comfortable caring for unstable patients without the RRT's assistance (17%) compared with upper‐level residents (34%) (P=0.01).

Residents overall disagreed with the statement that the RRT left them feeling less prepared to care for unstable patients (n=201; 85%). More upper‐level residents disagreed with this assertion (91%) compared with interns (75%) (P<0.01). Responses to this question did not differ significantly between medical and surgical residents.

Upper‐level residents were more likely to disagree with the statement that the RRT resulted in fewer opportunities to care for unstable patients (n=129; 86%) compared with interns (n=59; 73%) (P=0.05). Medical residents were also more likely to disagree with this statement (n=136; 86%) compared with surgical residents (n=52; 72%) (P=0.04).

With respect to residents' overall impressions of the educational value of the RRT, 68 (83%) interns and 116 (77%) upper‐level residents agreed that it provided a valuable educational experience (P=0.61). Medical and surgical residents differed in this regard, with 134 (83%) medical residents and 50 (70%) surgical residents agreeing that the RRT provided a valuable educational experience (P=0.01).

Effect of the RRT on Clinical Autonomy

Of all residents, 123 (52%) disagreed that the bedside nurse should always contact the primary resident prior to calling the RRT; 76 (32%) agreed with this statement. Medicine residents were more likely to disagree with this approach (n=97; 60%) than were surgical residents (n=26; 36%) (P<0.01). There was no difference between interns and upper‐level residents in response to this question. Most of those who disagreed with this statement were medical residents, whereas most surgical residents (n=38; 54%) agreed that they should be contacted first (P<0.01).

There were no differences between interns and upper‐level residents with respect to perceptions of the RRT's impact on clinical autonomy: 11% of interns and 13% of upper‐level residents agreed that the RRT decreased their clinical autonomy as a physician. There was no significant difference between medical and surgical residents' responses to this question.

The majority of residents (n=208; 88%) agreed that they and the RRT work together to make treatment decisions for patients. This was true regardless of year of training (P=0.61), but it was expressed more often among medical residents than surgical residents (n=151, 93% vs n=57, 83%; P=0.04).

DISCUSSION

Most studies examining the educational and cultural impact of RRTs exist in the nursing literature. These studies demonstrate that medical and surgical nurses are often reluctant to call the RRT for fear of criticism by the patient's physician.[5, 8, 9, 10, 11, 12, 13] In contrast, our data demonstrate that resident physicians across all levels of training and specialties have a positive view of the RRT and its role in patient care. The data support our hypothesis that although most residents perceive educational benefit from their interactions with the RRT, this perception is greater for less‐experienced residents and for those residents who routinely provide care for critically ill patients and serve as code team leaders. In addition, a minority of residents, irrespective of years of training or medical specialty, felt that the RRT negatively impacted their clinical autonomy.

Our data have several important implications. First, although over half of the residents surveyed had not been exposed to RRTs during medical school, and despite having no formal training on the role of the RRT during residency, most residents identified their interactions with the RRT as potential learning opportunities. This finding differs from that of Benin and colleagues, who suggested that RRTs might negatively impact residents' educational development and decrease opportunities for high‐stakes clinical reasoning by allowing the clinical decision‐making process to be driven by the RRT staff rather than the resident.[5] One possible explanation for this discrepancy is the variable makeup of the RRT at different institutions. At our medical center, the RRT is composed of a critical care nurse and respiratory therapist, whereas at other institutions, the RRT may be led by a resident, fellow, attending hospitalist, or intensivist, any of whom might supersede the primary resident once the RRT is engaged.

In our study, the perceived educational benefit of the RRT was most pronounced with interns. Interns likely derive incrementally greater benefit from each encounter with an acutely decompensating patient than do senior residents, whether the RRT is present or not. Observing the actions of seasoned nurses and respiratory therapists may demonstrate new tools for interns to use in their management of such situations; for example, the RRT may suggest different modes of oxygen delivery or new diagnostic tests. The RRT also likely helps interns navigate the hospital system by assisting with decisions around escalation of care and serving as a liaison to ICU staff.

Our data also have implications for resident perceptions of clinical autonomy. Interns, far less experienced caring for unstable patients than upper‐level residents, expressed more concern about the RRT stripping them of opportunities to do so and about feeling less prepared to handle clinically deteriorating patients. Part of this perception may be due to interns feeling less comfortable taking charge of a patient's care in the presence of an experienced critical care nurse and respiratory therapist, both for reasons related to clinical experience and to a cultural hierarchy that often places the intern at the bottom of the authority spectrum. In addition, when the RRT is called on an intern's patient, the senior resident may accompany the intern to the bedside and guide the intern on his or her approach to the situation; in some cases, the senior resident may take charge, leaving the intern feeling less autonomous.

If training sessions could be developed to address not only clinical decision making, but also multidisciplinary team interactions and roles in the acute care setting, this may mitigate interns' concerns. Such curricula could also enhance residents' experience in interprofessional care, an aspect of clinical training that has become increasingly important in the age of limited duty hours and higher‐volume, higher‐acuity inpatient censuses. An RRT model, like a code blue model, could be used in simulation‐based training to increase both comfort with use of the RRT and efficiency of the RRT, resident, and nurse as a team. Although our study did not specifically address residents' perceptions of multidisciplinary teams, this could be a promising area for further study.

For surgical residents, additional factors are likely at play. Surgical residents spend significant time in the operating room, reducing time present at the bedside and hindering the ability to respond swiftly when an RRT is called on their patient. This could cause surgical residents to feel less involved in the care of that patient (supported by our finding that fewer surgical residents felt able to collaborate with the RRT) and also to derive less educational benefit and clinical satisfaction from the experience. Differences between medical and surgical postgraduate training also likely play a role, manifested in varying clinical roles and duration of training, and as such it may not be appropriate to draw direct comparisons between respective postgraduate year levels. In addition, differences in patients' medical complexity, varying allegiance to the traditional hierarchy of medical providers, and degree of familiarity with the RRT itself may impact surgical residents' comfort with the RRT.

Limitations of our study include that it was conducted at a single site and addressed a specific population of residents at our tertiary academic center. Though we achieved an excellent response rate, our subspecialty sample sizes were too small to allow for individual comparisons among those groups. Conducting a larger study at multiple institutions where the makeup of the RRT differs could provide further insight into how different clinical environments and different RRT models impact resident perceptions. Finally, we allowed each respondent to interpret both educational benefit and clinical autonomy in the context of their own level of training and clinical practice rather than providing strict definitions of these terms. There is no standardized definition of autonomy in the context of resident clinical practice, and we did not measure direct educational outcomes. Our study design therefore allowed only for measurement of perceptions of these concepts. Measurement of the actual educational value of the RRT (for example, through direct clinical observation or by incorporating the RRT experience into an entrustable professional activity) would provide more quantitative evidence of the RRT's utility for our resident population. Future study in this area would help to support the development and ongoing assessment of RRT‐based curricula moving forward.

CONCLUSION

Our data show that resident physicians have a strongly favorable opinion of the RRT at our institution. Future studies should aim to quantify the educational benefit of RRTs for residents and identify areas for curricular development to enhance resident education as RRTs become more pervasive.

References
  1. Institute for Healthcare Improvement. Rapid response teams. Available at: http://www.ihi.org/topics/rapidresponseteams. Accessed May 5, 2014.
  2. Chan PS, Jain R, Nallmothu BK, Berg RA, Sasson C. Rapid response teams: a systematic review and meta‐analysis. Arch Intern Med. 2010;170(1):18–26.
  3. Devita MA, Bellomo R, Hillman K, et al. Findings of the first consensus conference on medical emergency teams. Crit Care Med. 2006;34(9):2463–2478.
  4. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid response systems: a systematic review. Crit Care Med. 2007;35(5):1238–1243.
  5. Benin AL, Borgstrom CP, Jenq GY, Roumanis SA, Horwitz LI. Defining impact of a rapid response team: qualitative study with nurses, physicians and hospital administrators. BMJ Qual Saf. 2012;21(5):391–398.
  6. Leach LS, Mayo A, O'Rourke M. How RNs rescue patients: a qualitative study of RNs' perceived involvement in rapid response teams. Qual Saf Health Care. 2010;19(5):e13.
  7. Metcalf R, Scott S, Ridgway M, Gibson D. Rapid response team approach to staff satisfaction. Orthop Nurs. 2008;27(5):266–271; quiz 272–273.
  8. Salamonson Y, Heere B, Everett B, Davidson P. Voices from the floor: nurses' perceptions of the medical emergency team. Intensive Crit Care Nurs. 2006;22(3):138–143.
  9. Shapiro SE, Donaldson NE, Scott MB. Rapid response teams seen through the eyes of the nurse. Am J Nurs. 2010;110(6):28–34; quiz 35–36.
  10. Shearer B, Marshall S, Buist MD, et al. What stops hospital clinical staff from following protocols? An analysis of the incidence and factors behind the failure of bedside clinical staff to activate the rapid response system in a multi‐campus Australian metropolitan healthcare service. BMJ Qual Saf. 2012;21(7):569–575.
  11. Sarani B, Sonnad S, Bergey MR, et al. Resident and RN perceptions of the impact of a medical emergency team on education and patient safety in an academic medical center. Crit Care Med. 2009;37(12):3091–3096.
  12. Marshall SD, Kitto S, Shearer W, et al. Why don't hospital staff activate the rapid response system (RRS)? How frequently is it needed and can the process be improved? Implement Sci. 2011;6:39.
  13. Peebles E, Subbe CP, Hughes P, Gemmell L. Timing and teamwork–an observational pilot study of patients referred to a rapid response team with the aim of identifying factors amenable to re‐design of a rapid response system. Resuscitation. 2012;83(6):782–787.
References
  1. Institute for Healthcare Improvement. Rapid response teams. Available at: http://www.ihi.org/topics/rapidresponseteams. Accessed May 5, 2014.
  2. Chan PS, Jain R, Nallmothu BK, Berg RA, Sasson C. Rapid response teams: a systematic review and meta‐analysis. Arch Intern Med. 2010;170(1):18-26.
  3. Devita MA, Bellomo R, Hillman K, et al. Findings of the first consensus conference on medical emergency teams. Crit Care Med. 2006;34(9):2463-2478.
  4. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid response systems: a systematic review. Crit Care Med. 2007;35(5):1238-1243.
  5. Benin AL, Borgstrom CP, Jenq GY, Roumanis SA, Horwitz LI. Defining impact of a rapid response team: qualitative study with nurses, physicians and hospital administrators. BMJ Qual Saf. 2012;21(5):391-398.
  6. Leach LS, Mayo A, O'Rourke M. How RNs rescue patients: a qualitative study of RNs' perceived involvement in rapid response teams. Qual Saf Health Care. 2010;19(5):e13.
  7. Metcalf R, Scott S, Ridgway M, Gibson D. Rapid response team approach to staff satisfaction. Orthop Nurs. 2008;27(5):266-271; quiz 272-273.
  8. Salamonson Y, Heere B, Everett B, Davidson P. Voices from the floor: nurses' perceptions of the medical emergency team. Intensive Crit Care Nurs. 2006;22(3):138-143.
  9. Shapiro SE, Donaldson NE, Scott MB. Rapid response teams seen through the eyes of the nurse. Am J Nurs. 2010;110(6):28-34; quiz 35-36.
  10. Shearer B, Marshall S, Buist MD, et al. What stops hospital clinical staff from following protocols? An analysis of the incidence and factors behind the failure of bedside clinical staff to activate the rapid response system in a multi‐campus Australian metropolitan healthcare service. BMJ Qual Saf. 2012;21(7):569-575.
  11. Sarani B, Sonnad S, Bergey MR, et al. Resident and RN perceptions of the impact of a medical emergency team on education and patient safety in an academic medical center. Crit Care Med. 2009;37(12):3091-3096.
  12. Marshall SD, Kitto S, Shearer W, et al. Why don't hospital staff activate the rapid response system (RRS)? How frequently is it needed and can the process be improved? Implement Sci. 2011;6:39.
  13. Peebles E, Subbe CP, Hughes P, Gemmell L. Timing and teamwork–an observational pilot study of patients referred to a rapid response team with the aim of identifying factors amenable to re‐design of a rapid response system. Resuscitation. 2012;83(6):782-787.
Issue
Journal of Hospital Medicine - 10(1)
Page Number
8-12
Publications
Article Type
Display Headline
The effect of a rapid response team on resident perceptions of education and autonomy
Sections
Article Source

© 2015 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Sumant R. Ranji, MD, UCSF Division of Hospital Medicine, 533 Parnassus Avenue, Box 0131, San Francisco, CA 94143‐0131; Telephone: 415‐514‐9256; Fax: 415‐514‐2094; E‐mail: [email protected]

Hospital to Home Transitions

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
The successes and challenges of hospital to home transitions

Hospital readmissions, which account for a substantial proportion of healthcare expenditures, have increasingly become a focus for hospitals and health systems. Hospitals now assume greater responsibility for population health and face financial penalties from federal and state agencies that consider readmissions a key measure of the quality of care provided during hospitalization. Consequently, there is broad interest in identifying approaches to reduce hospital reutilization, including emergency department (ED) revisits and hospital readmissions. In this issue of the Journal of Hospital Medicine, Auger et al.[1] report the results of a systematic review evaluating the effect of discharge interventions on hospital reutilization among children.

As Auger et al. note, the transition from hospital to home is a vulnerable time for children and their families, with 1 in 5 parents reporting major challenges with such transitions.[2] Auger and colleagues identified 14 studies spanning 3 pediatric disease processes that addressed this issue. The authors concluded that several interventions were potentially effective, but individual studies frequently used multifactorial interventions, precluding determination of discrete elements essential to success. The larger body of care transitions literature in adult populations provides insights for interventions that may benefit pediatric patients, as well as informs future research and quality improvement priorities.

The authors identified some distinct interventions that may successfully decrease hospital reutilization, which share common themes from the adult literature. The first is the use of a dedicated transition coordinator (eg, nurse) or coordinating center to assist with the patient's transition home after discharge. In adult studies, this bridging strategy[3, 4] (ie, use of a dedicated transition coordinator or provider) is initiated during the hospitalization and continues postdischarge in the form of phone calls or home visits. The second theme illustrated in both this pediatric review[1] and adult reviews[3, 4, 5] focuses on enhanced or individualized patient education. Most studies have used a combination of these strategies. For example, the Care Transitions Intervention (one of the best validated adult discharge approaches) uses a transition coach to aid the patient in medication self‐management, creation of a patient‐centered record, scheduling follow‐up appointments, and understanding signs and symptoms of a worsening condition.[6] In a randomized study, this intervention demonstrated a reduction in readmissions within 90 days to 16.7% in the intervention group, compared with 22.5% in the control group.[6] One of the pediatric studies highlighted in the review by Auger et al. achieved a decrease in 14‐day ED revisits from 8% before implementation of the program to 2.7% afterward.[7] This program was for patients discharged from the neonatal intensive care unit and involved a nurse coordinator (similar to a transition coach) who worked closely with families and ensured adequate resources prior to discharge as well as a home visitation program.[7]

Although Auger et al. identify some effective approaches to reducing hospital reutilization after discharge in children, their review and the complementary adult literature bring to light 4 main unresolved questions for hospitalists seeking to improve care transitions: (1) how to dissect diverse and heterogeneous interventions to determine the key driver of success, (2) how to interpret and generally apply interventions from single centers where they may have been tailored to a specific healthcare environment, (3) how to generalize the findings of many disease‐specific interventions to other populations, and (4) how to evaluate the cost and assess the cost-benefit of implementing many of the more resource-intensive interventions. An example of a heterogeneous intervention addressed in this pediatric systematic review was described by Ng et al.,[8] in which the intervention group received a combination of an enhanced discharge education session, disease‐specific nurse evaluation, an animated education booklet, and postdischarge telephone follow‐up, whereas the control group received a shorter discharge education session, a disease‐specific nurse evaluation only if referred by a physician, a written education booklet, and no telephone follow‐up. Investigators found that intervention patients were less likely to be readmitted or revisit the ED as compared with controls. A similarly multifaceted intervention introduced by Taggart et al.[9] was unable to detect a difference in readmissions or ED revisits. It is unclear whether the differences in outcomes were related to differences in the intervention bundle itself or institutional or local contextual factors, thus limiting application to other hospitals. Generalizability of interventions is similarly complicated in adults.

The studies presented in this pediatric review article are specific to 3 disease processes: cancer, asthma, and neonatal intensive care (ie, premature) populations. Beyond these populations, there were no other pediatric conditions that met inclusion criteria, thus limiting the generalizability of the findings. As described by Rennke et al.,[3] adult systematic reviews that have focused only on disease‐specific interventions to reduce hospital reutilization are also difficult to generalize to broader populations. Two of the 3 recent adult transition intervention systematic reviews excluded disease‐specific interventions in an attempt to find more broadly applicable interventions but struggled with the same heterogeneity discussed in this review by Auger et al.[3, 4] Although disease‐specific interventions were included in the third adult systematic review and the evaluation was restricted to randomized controlled trials, the authors still grappled with finding 1 or 2 common, successful intervention components.[5] The fourth unresolved question involves understanding the financial burden of implementing more resource‐intensive interventions such as postdischarge home nurse visits. For example, it may be difficult to justify the business case for hiring a transition coach or initiating home nurse visits when the cost and financial implications are unclear. Neither the pediatric nor adult literature describes this well.

Some of the challenges in identifying effective interventions differ between adult and pediatric populations. Adults tend to have multiple comorbid conditions, making them more medically complex and at greater risk for adverse outcomes, medication errors, and hospital utilization.[10] Although a small subset of the pediatric population with complex chronic medical conditions accounts for a majority of hospital reutilization and cost,[11] most hospitalized pediatric patients are otherwise healthy with acute illnesses.[12] Additionally, pediatric patients have lower overall hospital reutilization rates when compared with adults. Adult 30‐day readmission rates are approximately 20%[13] compared with pediatric patients whose mean 30‐day readmission rate is 6.5%.[14] With readmission being an outcome upon which studies are basing intervention success or failure, the relatively low readmission rates in the pediatric population make shifting that outcome more challenging.
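The sample-size arithmetic behind this point can be sketched with the standard normal-approximation formula for comparing two proportions. The baseline rates come from references 13 and 14; the 25% relative reduction and the alpha/power choices are illustrative assumptions, not from the article:

```python
import math

def n_per_arm(p1, p2):
    """Approximate sample size per arm for a two-sided two-proportion test
    at alpha = 0.05 with 80% power (normal approximation)."""
    z_a = 1.96  # z for two-sided alpha = 0.05
    z_b = 0.84  # z for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting the same 25% relative reduction in 30-day readmissions:
adult = n_per_arm(0.20, 0.15)       # ~20% adult baseline[13]
peds = n_per_arm(0.065, 0.04875)    # ~6.5% pediatric baseline[14]
print(adult, peds)
```

Under these assumptions a pediatric trial needs roughly 3.5 times as many patients per arm as an adult trial to detect the same relative improvement, which is one concrete reason the lower pediatric base rate makes readmission a hard outcome to shift measurably.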

There is also controversy about whether policymakers should be focusing on decreasing 30‐day readmission rates as a measure of success. We believe that efforts should focus on identifying more meaningful outcomes, especially outcomes important to patients and their families. No single metric is likely to be an adequate measure of the quality of care transitions, but a combination of outcome measures could potentially be more informative both for patients and clinicians. Patient satisfaction with the discharge process is measured as part of standard patient experience surveys, and the 3‐question Care Transitions Measure[15] has been validated and endorsed as a measure of patient perception of discharge safety in adult populations. There is a growing consensus that 30‐day readmission rates are lacking as a measure of discharge quality, and therefore, measuring shorter‐term (7‐ or 14‐day) readmission rates along with short‐term ED utilization after discharge would likely be more helpful for identifying care transitions problems. Attention should also be paid to measuring rates of specific adverse events in the postdischarge period, such as adverse drug events or failure to follow up on pending test results, as these failures are often implicated in reutilization.

In reflecting upon the published data on adult and pediatric transitions of care interventions and the lingering unanswered questions, we propose a few considerations for future direction of the field. First, engagement of the primary care provider may be beneficial. In many interventions describing a care transition coordinator, nursing fulfilled this role; however, there are opportunities for the primary care provider to play a greater role in this arena. Second, the use of factorial design in future studies may help elucidate which specific parts of each intervention may be the most crucial.[16] Finally, readmission rates are a controversial quality measure in adults. Pediatric readmissions are relatively uncommon, making it difficult to track measurements and show improvement. Clinicians, patients, and policymakers should prioritize outcome measures that are most meaningful to patients and their families and that occur at much higher rates than readmissions.
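To illustrate what a factorial design buys: a 2x2 trial crossing two intervention components (say, a transition coach and enhanced education) lets each component's main effect be estimated from the same patients. The cell counts below are invented purely for illustration; they are not data from any study cited here:

```python
# Hypothetical 2x2 factorial results: (readmissions, patients) per cell,
# keyed by (transition_coach, enhanced_education). Invented numbers.
cells = {
    (0, 0): (22, 100),
    (1, 0): (16, 100),
    (0, 1): (19, 100),
    (1, 1): (12, 100),
}

def pooled_rate(factor_index, level):
    """Readmission rate pooled over all cells with one factor fixed at `level`."""
    events = sum(e for k, (e, n) in cells.items() if k[factor_index] == level)
    total = sum(n for k, (e, n) in cells.items() if k[factor_index] == level)
    return events / total

# Main effect of each component: absolute change in readmission rate.
coach_effect = pooled_rate(0, 1) - pooled_rate(0, 0)
edu_effect = pooled_rate(1, 1) - pooled_rate(1, 0)
print(coach_effect, edu_effect)
```

With these made-up counts, the coach accounts for a 6.5-point reduction and the education component for 3.5 points; a conventional two-arm bundle trial could only report their combined effect, which is exactly the "key driver" problem the review identifies.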

References
  1. Auger KA, Kenyon CC, Feudtner C, Davis MM. Pediatric hospital discharge interventions to reduce subsequent utilization: a systematic review. J Hosp Med. 2014;9(0):000000.
  2. Co JP, Ferris TG, Marino BL, Homer CJ, Perrin JM. Are hospital characteristics associated with parental views of pediatric inpatient care quality? Pediatrics. 2003;111(2):308-314.
  3. Rennke S, Nguyen OK, Shoeb MH, Magan Y, Wachter RM, Ranji SR. Hospital‐initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):433-440.
  4. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520-528.
  5. Hesselink G, Schoonhoven L, Barach P, et al. Improving patient handovers from hospital to primary care: a systematic review. Ann Intern Med. 2012;157(6):417-428.
  6. Coleman EA, Parry C, Chalmers S, Min SJ. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822-1828.
  7. Kotagal UR, Perlstein PH, Gamblian V, Donovan EF, Atherton HD. Description and evaluation of a program for the early discharge of infants from a neonatal intensive care unit. J Pediatr. 1995;127(2):285-290.
  8. Ng DK, Chow PY, Lai WP, Chan KC, And BL, So HY. Effect of a structured asthma education program on hospitalized asthmatic children: a randomized controlled study. Pediatr Int. 2006;48(2):158-162.
  9. Taggart VS, Zuckerman AE, Sly RM, et al. You Can Control Asthma: evaluation of an asthma education‐program for hospitalized inner‐city children. Patient Educ Couns. 1991;17(1):35-47.
  10. Forster AJ, Clark HD, Menard A, et al. Adverse events among medical patients after discharge from hospital. CMAJ. 2004;170(3):345-349.
  11. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children's hospitals. JAMA. 2011;305(7):682-690.
  12. Keren R, Luan X, Localio R, et al. Prioritization of comparative effectiveness research topics in hospital pediatrics. Arch Pediatr Adolesc Med. 2012;166(12):1155-1164.
  13. Joynt KE, Orav EJ, Jha AK. Thirty‐day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305(7):675-681.
  14. Berry JG, Toomey SL, Zaslavsky AM, et al. Pediatric readmission prevalence and variability across hospitals. JAMA. 2013;309(4):372-380.
  15. Parry C, Mahoney E, Chalmers SA, Coleman EA. Assessing the quality of transitional care: further applications of the care transitions measure. Med Care. 2008;46(3):317-322.
  16. Moen RD, Nolan TW, Provost LP. Quality Improvement Through Planned Experimentation. 2nd ed. New York, NY: McGraw‐Hill; 1999.
Article PDF
Issue
Journal of Hospital Medicine - 9(4)
Publications
Page Number
271-273
Sections
Article PDF
Article PDF

Hospital readmissions, which account for a substantial proportion of healthcare expenditures, have increasingly become a focus for hospitals and health systems. Hospitals now assume greater responsibility for population health, and face financial penalties by federal and state agencies that consider readmissions a key measure of the quality of care provided during hospitalization. Consequently, there is broad interest in identifying approaches to reduce hospital reutilization, including emergency department (ED) revisits and hospital readmissions. In this issue of the Journal of Hospital Medicine, Auger et al.[1] report the results of a systematic review, which evaluates the effect of discharge interventions on hospital reutilization among children.

As Auger et al. note, the transition from hospital to home is a vulnerable time for children and their families, with 1 in 5 parents reporting major challenges with such transitions.[2] Auger and colleagues identified 14 studies spanning 3 pediatric disease processes that addressed this issue. The authors concluded that several interventions were potentially effective, but individual studies frequently used multifactorial interventions, precluding determination of discrete elements essential to success. The larger body of care transitions literature in adult populations provides insights for interventions that may benefit pediatric patients, as well as informs future research and quality improvement priorities.

The authors identified some distinct interventions that may successfully decrease hospital reutilization, which share common themes from the adult literature. The first is the use of a dedicated transition coordinator (eg, nurse) or coordinating center to assist with the patient's transition home after discharge. In adult studies, this bridging strategy[3, 4] (ie, use of a dedicated transition coordinator or provider) is initiated during the hospitalization and continues postdischarge in the form of phone calls or home visits. The second theme illustrated in both this pediatric review[1] and adult reviews[3, 4, 5] focuses on enhanced or individualized patient education. Most studies have used a combination of these strategies. For example, the Care Transitions Intervention (one of the best validated adult discharge approaches) uses a transition coach to aid the patient in medication self‐management, creation of a patient‐centered record, scheduling follow‐up appointments, and understanding signs and symptoms of a worsening condition.[6] In a randomized study, this intervention demonstrated a reduction in readmissions within 90 days to 16.7% in the intervention group, compared with 22.5% in the control group.[6] One of the pediatric studies highlighted in the review by Auger et al. achieved a decrease in 14‐day ED revisits from 8% prior to implementation of the program to 2.7% following implementation of the program.[7] This program was for patients discharged from the neonatal intensive care unit and involved a nurse coordinator (similar to a transition coach) who worked closely with families and ensured adequate resources prior to discharge as well as a home visitation program.[7]

Although Auger et al. identify some effective approaches to reducing hospital reutilization after discharge in children, their review and the complementary adult literature bring to light 4 main unresolved questions for hospitalists seeking to improve care transitions: (1) how to dissect diverse and heterogeneous interventions to determine the key driver of success, (2) how to interpret and generally apply interventions from single centers where they may have been tailored to a specific healthcare environment, (3) how to generalize the findings of many disease‐specific interventions to other populations, and (4) how to evaluate the cost and assess the costbenefit of implementing many of the more resource intensive interventions. An example of a heterogeneous intervention addressed in this pediatric systematic review was described by Ng et al.,[8] in which the intervention group received a combination of an enhanced discharge education session, disease‐specific nurse evaluation, an animated education booklet, and postdischarge telephone follow‐up, whereas the control group received a shorter discharge education session, a disease‐specific nurse evaluation only if referred by a physician, a written education booklet, and no telephone follow‐up. Investigators found that intervention patients were less likely to be readmitted or revisit the ED as compared with controls. A similarly multifaceted intervention introduced by Taggart et al.[9] was unable to detect a difference in readmissions or ED revisits. It is unclear whether or not the differences in outcomes were related to differences in the intervention bundle itself or institutional or local contextual factors, thus limiting application to other hospitals. Generalizability of interventions is similarly complicated in adults.

The studies presented in this pediatric review article are specific to 3 disease processes: cancer, asthma, and neonatal intensive care (ie, premature) populations. Beyond these populations, there were no other pediatric conditions that met inclusion criteria, thus limiting the generalizability of the findings. As described by Rennke et al.,[3] adult systematic reviews that have focused only on disease‐specific interventions to reduce hospital reutilization are also difficult to generalize to broader populations. Two of the 3 recent adult transition intervention systematic reviews excluded disease‐specific interventions in an attempt to find more broadly applicable interventions but struggled with the same heterogeneity discussed in this review by Auger et al.[3, 4] Although disease‐specific interventions were included in the third adult systematic review and the evaluation was restricted to randomized controlled trials, the authors still grappled with finding 1 or 2 common, successful intervention components.[5] The fourth unresolved question involves understanding the financial burden of implementing more resource‐intensive interventions such as postdischarge home nurse visits. For example, it may be difficult to justify the business case for hiring a transition coach or initiating home nurse visits when the cost and financial implications are unclear. Neither the pediatric nor adult literature describes this well.

Some of the challenges in identifying effective interventions differ between adult and pediatric populations. Adults tend to have multiple comorbid conditions, making them more medically complex and at greater risk for adverse outcomes, medication errors, and hospital utilization.[10] Although a small subset of the pediatric population with complex chronic medical conditions accounts for a majority of hospital reutilization and cost,[11] most hospitalized pediatric patients are otherwise healthy with acute illnesses.[12] Additionally, pediatric patients have lower overall hospital reutilization rates when compared with adults. Adult 30‐day readmission rates are approximately 20%[13] compared with pediatric patients whose mean 30‐day readmission rate is 6.5%.[14] With readmission being an outcome upon which studies are basing intervention success or failure, the relatively low readmission rates in the pediatric population make shifting that outcome more challenging.

There is also controversy about whether policymakers should be focusing on decreasing 30‐day readmission rates as a measure of success. We believe that efforts should focus on identifying more meaningful outcomes, especially outcomes important to patients and their families. No single metric is likely to be an adequate measure of the quality of care transitions, but a combination of outcome measures could potentially be more informative both for patients and clinicians. Patient satisfaction with the discharge process is measured as part of standard patient experience surveys, and the 3‐question Care Transitions Measure[15] has been validated and endorsed as a measure of patient perception of discharge safety in adult populations. There is a growing consensus that 30‐day readmission rates are lacking as a measure of discharge quality, and therefore, measuring shorter‐term7‐ or 14‐dayreadmission rates along with short‐term ED utilization after discharge would likely be more helpful for identifying care transitions problems. Attention should also be paid to measuring rates of specific adverse events in the postdischarge period, such as adverse drug events or failure to follow up on pending test results, as these failures are often implicated in reutilization.

In reflecting upon the published data on adult and pediatric transitions of care interventions and the lingering unanswered questions, we propose a few considerations for future direction of the field. First, engagement of the primary care provider may be beneficial. In many interventions describing a care transition coordinator, nursing fulfilled this role; however, there are opportunities for the primary care provider to play a greater role in this arena. Second, the use of factorial design in future studies may help elucidate which specific parts of each intervention may be the most crucial.[16] Finally, readmission rates are a controversial quality measure in adults. Pediatric readmissions are relatively uncommon, making it difficult to track measurements and show improvement. Clinicians, patients, and policymakers should prioritize outcome measures that are most meaningful to patients and their families that occur at a much higher rate than that of readmissions.

Hospital readmissions, which account for a substantial proportion of healthcare expenditures, have increasingly become a focus for hospitals and health systems. Hospitals now assume greater responsibility for population health, and face financial penalties by federal and state agencies that consider readmissions a key measure of the quality of care provided during hospitalization. Consequently, there is broad interest in identifying approaches to reduce hospital reutilization, including emergency department (ED) revisits and hospital readmissions. In this issue of the Journal of Hospital Medicine, Auger et al.[1] report the results of a systematic review, which evaluates the effect of discharge interventions on hospital reutilization among children.

As Auger et al. note, the transition from hospital to home is a vulnerable time for children and their families, with 1 in 5 parents reporting major challenges with such transitions.[2] Auger and colleagues identified 14 studies spanning 3 pediatric disease processes that addressed this issue. The authors concluded that several interventions were potentially effective, but individual studies frequently used multifactorial interventions, precluding determination of discrete elements essential to success. The larger body of care transitions literature in adult populations provides insights for interventions that may benefit pediatric patients, as well as informs future research and quality improvement priorities.

The authors identified some distinct interventions that may successfully decrease hospital reutilization, which share common themes from the adult literature. The first is the use of a dedicated transition coordinator (eg, nurse) or coordinating center to assist with the patient's transition home after discharge. In adult studies, this bridging strategy[3, 4] (ie, use of a dedicated transition coordinator or provider) is initiated during the hospitalization and continues postdischarge in the form of phone calls or home visits. The second theme illustrated in both this pediatric review[1] and adult reviews[3, 4, 5] focuses on enhanced or individualized patient education. Most studies have used a combination of these strategies. For example, the Care Transitions Intervention (one of the best validated adult discharge approaches) uses a transition coach to aid the patient in medication self‐management, creation of a patient‐centered record, scheduling follow‐up appointments, and understanding signs and symptoms of a worsening condition.[6] In a randomized study, this intervention demonstrated a reduction in readmissions within 90 days to 16.7% in the intervention group, compared with 22.5% in the control group.[6] One of the pediatric studies highlighted in the review by Auger et al. achieved a decrease in 14‐day ED revisits from 8% prior to implementation of the program to 2.7% following implementation of the program.[7] This program was for patients discharged from the neonatal intensive care unit and involved a nurse coordinator (similar to a transition coach) who worked closely with families and ensured adequate resources prior to discharge as well as a home visitation program.[7]

Although Auger et al. identify some effective approaches to reducing hospital reutilization after discharge in children, their review and the complementary adult literature bring to light 4 main unresolved questions for hospitalists seeking to improve care transitions: (1) how to dissect diverse and heterogeneous interventions to determine the key driver of success, (2) how to interpret and generally apply interventions from single centers where they may have been tailored to a specific healthcare environment, (3) how to generalize the findings of many disease‐specific interventions to other populations, and (4) how to evaluate the cost and assess the costbenefit of implementing many of the more resource intensive interventions. An example of a heterogeneous intervention addressed in this pediatric systematic review was described by Ng et al.,[8] in which the intervention group received a combination of an enhanced discharge education session, disease‐specific nurse evaluation, an animated education booklet, and postdischarge telephone follow‐up, whereas the control group received a shorter discharge education session, a disease‐specific nurse evaluation only if referred by a physician, a written education booklet, and no telephone follow‐up. Investigators found that intervention patients were less likely to be readmitted or revisit the ED as compared with controls. A similarly multifaceted intervention introduced by Taggart et al.[9] was unable to detect a difference in readmissions or ED revisits. It is unclear whether or not the differences in outcomes were related to differences in the intervention bundle itself or institutional or local contextual factors, thus limiting application to other hospitals. Generalizability of interventions is similarly complicated in adults.

The studies presented in this pediatric review article are specific to 3 disease processes: cancer, asthma, and neonatal intensive care (ie, premature) populations. Beyond these populations, there were no other pediatric conditions that met inclusion criteria, thus limiting the generalizability of the findings. As described by Rennke et al.,[3] adult systematic reviews that have focused only on disease‐specific interventions to reduce hospital reutilization are also difficult to generalize to broader populations. Two of the 3 recent adult transition intervention systematic reviews excluded disease‐specific interventions in an attempt to find more broadly applicable interventions but struggled with the same heterogeneity discussed in this review by Auger et al.[3, 4] Although disease‐specific interventions were included in the third adult systematic review and the evaluation was restricted to randomized controlled trials, the authors still grappled with finding 1 or 2 common, successful intervention components.[5] The fourth unresolved question involves understanding the financial burden of implementing more resource‐intensive interventions such as postdischarge home nurse visits. For example, it may be difficult to justify the business case for hiring a transition coach or initiating home nurse visits when the cost and financial implications are unclear. Neither the pediatric nor adult literature describes this well.

Some of the challenges in identifying effective interventions differ between adult and pediatric populations. Adults tend to have multiple comorbid conditions, making them more medically complex and at greater risk for adverse outcomes, medication errors, and hospital utilization.[10] Although a small subset of the pediatric population with complex chronic medical conditions accounts for a majority of hospital reutilization and cost,[11] most hospitalized pediatric patients are otherwise healthy with acute illnesses.[12] Additionally, pediatric patients have lower overall hospital reutilization rates when compared with adults. Adult 30‐day readmission rates are approximately 20%[13] compared with pediatric patients whose mean 30‐day readmission rate is 6.5%.[14] With readmission being an outcome upon which studies are basing intervention success or failure, the relatively low readmission rates in the pediatric population make shifting that outcome more challenging.

There is also controversy about whether policymakers should be focusing on decreasing 30‐day readmission rates as a measure of success. We believe that efforts should focus on identifying more meaningful outcomes, especially outcomes important to patients and their families. No single metric is likely to be an adequate measure of the quality of care transitions, but a combination of outcome measures could potentially be more informative both for patients and clinicians. Patient satisfaction with the discharge process is measured as part of standard patient experience surveys, and the 3‐question Care Transitions Measure[15] has been validated and endorsed as a measure of patient perception of discharge safety in adult populations. There is a growing consensus that 30‐day readmission rates are lacking as a measure of discharge quality, and therefore, measuring shorter‐term (7‐ or 14‐day) readmission rates along with short‐term ED utilization after discharge would likely be more helpful for identifying care transitions problems. Attention should also be paid to measuring rates of specific adverse events in the postdischarge period, such as adverse drug events or failure to follow up on pending test results, as these failures are often implicated in reutilization.

In reflecting upon the published data on adult and pediatric transitions of care interventions and the lingering unanswered questions, we propose a few considerations for future direction of the field. First, engagement of the primary care provider may be beneficial. In many interventions describing a care transition coordinator, nursing fulfilled this role; however, there are opportunities for the primary care provider to play a greater role in this arena. Second, the use of factorial design in future studies may help elucidate which specific parts of each intervention may be the most crucial.[16] Finally, readmission rates are a controversial quality measure in adults. Pediatric readmissions are relatively uncommon, making it difficult to track measurements and show improvement. Clinicians, patients, and policymakers should prioritize outcome measures that are most meaningful to patients and their families that occur at a much higher rate than that of readmissions.
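
As a sketch of what the factorial-design suggestion entails, the snippet below enumerates the arms of a full 2^3 design over three hypothetical discharge-intervention components; the component names are illustrative only and are not drawn from any specific study discussed above.

```python
from itertools import product

# Hypothetical discharge-intervention components (illustrative only)
components = ["transition coach", "home nurse visit", "follow-up phone call"]

# A full 2^3 factorial design assigns one study arm to every on/off
# combination of the components, so each component's main effect and the
# interactions between components can be estimated separately.
arms = [
    dict(zip(components, combo))
    for combo in product([False, True], repeat=len(components))
]

print(len(arms))  # 2**3 = 8 study arms
print(arms[0])    # the control arm, with every component off
```

With k components the design has 2^k arms, which is why factorial studies typically test a small number of candidate components at a time.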

References
  1. Auger KA, Kenyon CC, Feudtner C, Davis MM. Pediatric hospital discharge interventions to reduce subsequent utilization: a systematic review. J Hosp Med. 2014;9(0):000000.
  2. Co JP, Ferris TG, Marino BL, Homer CJ, Perrin JM. Are hospital characteristics associated with parental views of pediatric inpatient care quality? Pediatrics. 2003;111(2):308-314.
  3. Rennke S, Nguyen OK, Shoeb MH, Magan Y, Wachter RM, Ranji SR. Hospital‐initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):433-440.
  4. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520-528.
  5. Hesselink G, Schoonhoven L, Barach P, et al. Improving patient handovers from hospital to primary care: a systematic review. Ann Intern Med. 2012;157(6):417-428.
  6. Coleman EA, Parry C, Chalmers S, Min SJ. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822-1828.
  7. Kotagal UR, Perlstein PH, Gamblian V, Donovan EF, Atherton HD. Description and evaluation of a program for the early discharge of infants from a neonatal intensive care unit. J Pediatr. 1995;127(2):285-290.
  8. Ng DK, Chow PY, Lai WP, Chan KC, And BL, So HY. Effect of a structured asthma education program on hospitalized asthmatic children: a randomized controlled study. Pediatr Int. 2006;48(2):158-162.
  9. Taggart VS, Zuckerman AE, Sly RM, et al. You Can Control Asthma: evaluation of an asthma education program for hospitalized inner‐city children. Patient Educ Couns. 1991;17(1):35-47.
  10. Forster AJ, Clark HD, Menard A, et al. Adverse events among medical patients after discharge from hospital. CMAJ. 2004;170(3):345-349.
  11. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children's hospitals. JAMA. 2011;305(7):682-690.
  12. Keren R, Luan X, Localio R, et al. Prioritization of comparative effectiveness research topics in hospital pediatrics. Arch Pediatr Adolesc Med. 2012;166(12):1155-1164.
  13. Joynt KE, Orav EJ, Jha AK. Thirty‐day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305(7):675-681.
  14. Berry JG, Toomey SL, Zaslavsky AM, et al. Pediatric readmission prevalence and variability across hospitals. JAMA. 2013;309(4):372-380.
  15. Parry C, Mahoney E, Chalmers SA, Coleman EA. Assessing the quality of transitional care: further applications of the care transitions measure. Med Care. 2008;46(3):317-322.
  16. Moen RD, Nolan TW, Provost LP. Quality Improvement Through Planned Experimentation. 2nd ed. New York, NY: McGraw‐Hill; 1999.
Issue
Journal of Hospital Medicine - 9(4)
Page Number
271-273
Display Headline
The successes and challenges of hospital to home transitions
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Samir S. Shah, MD, Division of Hospital Medicine, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., MLC 9016, Cincinnati, OH 45229‐3039; Telephone: 513–636‐0409; E‐mail: [email protected]

Overnight Resident Supervision

Display Headline
Effects of increased overnight supervision on resident education, decision‐making, and autonomy

Postgraduate medical education has traditionally relied on a training model of progressive independence, where housestaff learn patient care through increasing autonomy and decreasing levels of supervision.1 While this framework has little empirical backing, it is grounded in sound educational theory from similar disciplines and endorsed by medical associations.1, 2 The Accreditation Council for Graduate Medical Education (ACGME) recently implemented regulations requiring that first‐year residents have a qualified supervisor physically present or immediately available at all times.3 Previously, oversight by an offsite supervisor (for example, an attending physician at home) was considered adequate. These new regulations, although motivated by patient safety imperatives,4 have elicited concerns that increased supervision may lead to decreased housestaff autonomy and an increased reliance on supervisors for clinical guidance.5 Such changes could ultimately produce less qualified practitioners by the completion of training.

Critics of the current training model point to a patient safety mechanism where housestaff must take responsibility for requesting attending‐level help when situations arise that surpass their skill level.5 For resident physicians, however, the decision to request support is often complex and dependent not only on the clinical question, but also on unique and variable trainee and supervisor factors.6 Survey data from 1999, prior to the current training regulations, showed that increased faculty presence improved resident reports of educational value, quality of patient care, and autonomy.7 A recent survey, performed after the initiation of overnight attending supervision at an academic medical center, demonstrated perceived improvements in educational value and patient‐level outcomes by both faculty and housestaff.8 Whether increased supervision and resident autonomy can coexist remains undetermined.

Overnight rotations for residents (commonly referred to as night float) are often times of little direct or indirect supervision. A recent systematic review of clinical supervision practices for housestaff in all fields found scarce literature on overnight supervision practices.9 There remain limited and conflicting data regarding the quality of patient care provided by the resident night float,10 as well as evidence revealing a low perceived educational value of night rotations when compared with non‐night float rotations.11 Yet in 2006, more than three‐quarters of all internal medicine programs employed night float rotations.12 In response to ACGME guidelines mandating decreased shift lengths with continued restrictions on overall duty hours, it appears likely that even more training programs will implement night float systems.

The presence of overnight hospitalists (also known as nocturnists) is growing within the academic setting, yet their role in relation to trainees is either poorly defined13 or independent of housestaff.14 To better understand the impact of increasing levels of supervision on residency training, we investigated housestaff perceptions of education, autonomy, and clinical decision‐making before and after implementation of an in‐hospital, overnight attending physician (nocturnist).

METHODS

The study was conducted at a 570‐bed academic, tertiary care medical center affiliated with an internal medicine residency program of 170 housestaff. At our institution, all first year residents perform a week of intern night float consisting of overnight cross‐coverage of general medicine patients on the floor, step‐down, and intensive care units (ICUs). Second and third year residents each complete 4 to 6 days of resident night float each year at this hospital. They are responsible for assisting the intern night float with cross‐coverage, in addition to admitting general medicine patients to the floor, step‐down unit, and intensive care units. Every night at our medical center, 1 intern night float and 1 resident night float are on duty in the hospital; this is in addition to a resident from the on‐call medicine team and a resident working in the ICU. Prior to July 2010, no internal medicine attending physicians were physically present in the hospital at night. Oversight for the intern and resident night float was provided by the attending physician for the on‐call resident ward team, who was at home and available by pager. The night float housestaff were instructed to contact the responsible attending physician only when a major change in clinical status occurred for hospitalized or newly admitted patients, though this expectation was neither standardized nor monitored.

We established a nocturnist program at the start of the 2010 academic year. The position was staffed by hospitalists from within the Division of Hospital Medicine without the use of moonlighters. Two‐thirds of shifts were filled by 3 dedicated nocturnists with remaining staffing provided by junior hospitalist faculty. The dedicated nocturnists had recently completed their internal medicine residency at our institution. Shift length was 12 hours and dedicated nocturnists worked, on average, 10 shifts per month. The nocturnist filled a critical overnight safety role through mandatory bedside staffing of newly admitted ICU patients within 2 hours of admission, discussion in person or via telephone of newly admitted step‐down unit patients within 6 hours of admission, and direct or indirect supervision of the care of any patients undergoing a major change in clinical status. The overnight hospitalist was also available for clinical questions and to assist housestaff with triaging of overnight admissions. After nocturnist implementation, overnight housestaff received direct supervision or had immediate access to direct supervision, while prior to the nocturnist, residents had access only to indirect supervision.

In addition, the nocturnist admitted medicine patients after 1 AM in a 1:1 ratio with the admitting night float resident, performed medical consults, and provided coverage of non‐teaching medicine services. While actual volume numbers were not obtained, the estimated average of resident admissions per night was 2 to 3, and the number of nocturnist admissions was 1 to 2. The nocturnist also met nightly with night float housestaff for half‐hour didactics focusing on the management of common overnight clinical scenarios. The role of the new nocturnist was described to all housestaff in orientation materials given prior to their night float rotation and their general medicine ward rotation.

We administered pre‐rotation and post‐rotation surveys of internal medicine intern and resident physicians who underwent the night float rotation at our hospital during the 2010 to 2011 academic year. Surveys examined housestaff perceptions of the night float rotation with regard to supervisory roles, educational and clinical value, and clinical decision‐making prior to and after implementation of the nocturnist. Surveys were designed by the study investigators based on prior literature,1, 5–10 personal experience, and housestaff suggestion, and were refined during works‐in‐progress meetings. Surveys were composed of Likert‐style questions asking housestaff to rate their level of agreement (1–5, strongly disagree to strongly agree) with statements regarding the supervisory and educational experience of the night float rotation, and to judge their frequency of contact (1–5, never to always/nightly) with an attending physician for specific clinical scenarios. The clinical scenarios described situations dealing with attending–resident communication around transfers of care, diagnostic evaluation, therapeutic interventions, and adverse events. Scenarios were taken from previous literature describing supervision preferences of faculty and residents during times of critical clinical decision‐making.15

One week prior to the beginning of their night float rotation for the 2010–2011 academic year, housestaff were sent an e‐mail request to complete an online survey asking about their night float rotation during the prior academic year, when no nocturnist was present. One week after completion of their night float rotation for the 2010–2011 academic year, housestaff received an e‐mail with a link to a post‐survey asking about their recently completed, nocturnist‐supervised, night float rotation. First year residents received only a post‐survey at the completion of their night float rotation, as they would be unable to reflect on prior experience.

Informed consent was embedded within the e‐mail survey request. Survey requests were sent by a fellow within the Division of Hospital Medicine with a brief message cosigned by an associate program director of the residency program. We did not collect unique identifiers from respondents in order to offer additional assurances to the participants that the survey was anonymous. There was no incentive offered for completion of the survey. Survey data were anonymous and downloaded to a database by a third party. Data were analyzed using Microsoft Excel, and pre‐responses and post‐responses were compared using a Student t test. The study was approved by the medical center's Institutional Review Board.

RESULTS

Rates of response for pre‐surveys and post‐surveys were 57% (43 respondents) and 51% (53 respondents), respectively. Due to response rates and in order to convey accurately the perceptions of the training program as a whole, we collapsed responses of the pre‐surveys and post‐surveys based on level of training. After implementation of the overnight attending, we observed a significant increase in the perceived clinical value of the night float rotation (3.95 vs 4.27, P = 0.01) as well as in the adequacy of overnight supervision (3.65 vs 4.30, P < 0.0001; Table 1). There was no reported change in housestaff decision‐making autonomy (4.35 vs 4.45, P = 0.44). In addition, we noted a nonsignificant trend towards an increased perception of the night float rotation as a valuable educational experience (3.83 vs 4.04, P = 0.24). After implementation of the nocturnist, more resident physicians agreed that overnight supervision by an attending positively impacted patient outcomes (3.79 vs 4.30, P = 0.002).

Table 1. General Perceptions of the Night Float Rotation

| Statement | Pre‐Nocturnist (n = 43), Mean (SD) | Post‐Nocturnist (n = 53), Mean (SD) | P Value |
| --- | --- | --- | --- |
| Night float is a valuable educational rotation | 3.83 (0.81) | 4.04 (0.83) | 0.24 |
| Night float is a valuable clinical rotation | 3.95 (0.65) | 4.27 (0.59) | 0.01 |
| I have adequate overnight supervision | 3.65 (0.76) | 4.30 (0.72) | <0.0001 |
| I have sufficient autonomy to make clinical decisions | 4.35 (0.57) | 4.45 (0.60) | 0.44 |
| Overnight supervision by an attending positively impacts patient outcomes | 3.79 (0.88) | 4.30 (0.74) | 0.002 |

NOTE: Responses are strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non‐response. Abbreviations: SD, standard deviation.
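
As a rough check, the Student t statistics in Table 1 can be approximately reconstructed from the reported means, SDs, and group sizes. This is a sketch only: per-item response counts fluctuated, and a normal approximation stands in for the exact t distribution (nearly exact at df = 94).

```python
from math import sqrt
from statistics import NormalDist

def pooled_t(mean_pre, sd_pre, n_pre, mean_post, sd_post, n_post):
    """Two-sample (equal-variance) Student t test from summary statistics."""
    # Pooled estimate of the common variance
    sp2 = ((n_pre - 1) * sd_pre ** 2 + (n_post - 1) * sd_post ** 2) / (n_pre + n_post - 2)
    se = sqrt(sp2) * sqrt(1 / n_pre + 1 / n_post)
    t = (mean_post - mean_pre) / se
    # Two-sided p value via the normal approximation to the t distribution
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# "I have adequate overnight supervision": 3.65 (0.76) vs 4.30 (0.72)
t, p = pooled_t(3.65, 0.76, 43, 4.30, 0.72, 53)
print(round(t, 2), p < 0.0001)  # -> 4.29 True, consistent with the reported P < 0.0001
```

The pooled-variance form matches the Student t test named in the Methods; a Welch (unequal-variance) test would give slightly different values.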

After implementation of the nocturnist, night float providers demonstrated increased rates of contacting an attending physician overnight (Table 2). There were significantly greater rates of attending contact for transfers from outside facilities (2.00 vs 3.20, P = 0.006) and during times of adverse events (2.51 vs 3.25, P = 0.04). We observed a reported increase in attending contact prior to ordering invasive diagnostic procedures (1.75 vs 2.76, P = 0.004) and noninvasive diagnostic procedures (1.09 vs 1.31, P = 0.03), as well as prior to initiation of intravenous antibiotics (1.11 vs 1.47, P = 0.007) and vasopressors (1.52 vs 2.40, P = 0.004).

Table 2. Self‐Reported Incidence of Overnight Attending Contact During Critical Decision‐Making

| Scenario | Pre‐Nocturnist (n = 42), Mean (SD) | Post‐Nocturnist (n = 51), Mean (SD) | P Value |
| --- | --- | --- | --- |
| Receive transfer from outside facility | 2.00 (1.27) | 3.20 (1.58) | 0.006 |
| Prior to ordering noninvasive diagnostic procedure | 1.09 (0.29) | 1.31 (0.58) | 0.03 |
| Prior to ordering an invasive procedure | 1.75 (0.84) | 2.76 (1.45) | 0.004 |
| Prior to initiation of intravenous antibiotics | 1.11 (0.32) | 1.47 (0.76) | 0.007 |
| Prior to initiation of vasopressors | 1.52 (0.82) | 2.40 (1.49) | 0.004 |
| Patient experiencing adverse event, regardless of cause | 2.51 (1.31) | 3.25 (1.34) | 0.04 |

NOTE: Responses are never contact (1) to always contact (5). Response rate (n) fluctuates due to item non‐response. Abbreviations: SD, standard deviation.

After initiating the program, the nocturnist became the most commonly contacted overnight provider by the night float housestaff (Table 3). We observed a decrease in peer to peer contact between the night float housestaff and the on‐call overnight resident after implementation of the nocturnist (2.67 vs 2.04, P = 0.006).

Table 3. Self‐Reported Incidence of Night Float Contact With Overnight Providers for Patient Care

| Provider | Pre‐Nocturnist (n = 43), Mean (SD) | Post‐Nocturnist (n = 53), Mean (SD) | P Value |
| --- | --- | --- | --- |
| ICU fellow | 1.86 (0.70) | 1.86 (0.83) | 0.96 |
| On‐call resident | 2.67 (0.89) | 2.04 (0.92) | 0.006 |
| ICU resident | 2.14 (0.74) | 2.04 (0.91) | 0.56 |
| On‐call medicine attending | 1.41 (0.79) | 1.26 (0.52) | 0.26 |
| Patient's PMD | 1.27 (0.31) | 1.15 (0.41) | 0.31 |
| Referring MD | 1.32 (0.60) | 1.15 (0.45) | 0.11 |
| Nocturnist |  | 3.59 (1.22) |  |

NOTE: Responses are never (1) to nightly (5). Response rate (n) fluctuates due to item non‐response. Abbreviations: ICU, intensive care unit; PMD, primary medical doctor; SD, standard deviation.

Attending presence led to increased agreement that there was a defined overnight attending to contact (2.97 vs 1.96, P < 0.0001) and a decreased fear of waking an attending overnight for assistance (3.26 vs 2.72, P = 0.03). Increased attending availability, however, did not change resident physician's fear of revealing knowledge gaps, their desire to make decisions independently, or their belief that contacting an attending would not change a patient's outcome (Table 4).

Table 4. Reasons Night Float Housestaff Do Not Contact an Attending Physician

| Statement | Pre‐Nocturnist (n = 42), Mean (SD) | Post‐Nocturnist (n = 52), Mean (SD) | P Value |
| --- | --- | --- | --- |
| No defined attending to contact | 2.97 (1.35) | 1.96 (0.92) | <0.0001 |
| Fear of waking an attending | 3.26 (1.25) | 2.72 (1.09) | 0.03 |
| Fear of revealing knowledge gaps | 2.26 (1.14) | 2.25 (0.96) | 0.95 |
| Would rather make decision on own | 3.40 (0.93) | 3.03 (1.06) | 0.08 |
| Will not change patient outcome | 3.26 (1.06) | 3.21 (1.03) | 0.81 |

NOTE: Responses are strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non‐response. Abbreviations: SD, standard deviation.

DISCUSSION

The ACGME's new duty hour regulations require that supervision for first‐year residents be provided by a qualified physician (advanced resident, fellow, or attending physician) who is physically present at the hospital. Our study demonstrates that increased direct overnight supervision provided by an in‐house nocturnist enhanced the clinical value of the night float rotation and the perceived quality of patient care. In our study, increased attending supervision did not reduce perceived decision‐making autonomy, and in fact led to increased rates of attending contact during times of critical clinical decision‐making. Such results may help assuage fears that recent regulations mandating enhanced attending supervision will produce less capable practitioners, and offer reassurance that such changes are positively impacting patient care.

Many academic institutions are implementing nocturnists, although their precise roles and responsibilities are still being defined. Our nocturnist program was explicitly designed with housestaff supervision as a core responsibility, with the goal of improving patient safety and housestaff education overnight. We found that availability barriers to attending contact were logically decreased with in‐house faculty presence. Potentially harmful attitudes, however, around requesting support (such as fear of revealing knowledge gaps or the desire to make decisions independently) remained. Furthermore, despite statistically significant increases in contact between faculty and residents at times of critical decision‐making, overall rates of attending contact for diagnostic and therapeutic interventions remained low. It is unknown from our study or previous research, however, what level of contact is appropriate or ideal for many clinical scenarios.

Additionally, we described a novel role of an academic nocturnist at a tertiary care teaching hospital and offered a potential template for the development of academic nocturnists at similar institutions seeking to increase direct overnight supervision. Such roles have not been previously well defined in the literature. Based on our experience, the nocturnist's role was manageable and well utilized by housestaff, particularly for assistance with critically ill patients and overnight triaging. We believe there are a number of factors associated with the success of this role. First, clear guidelines were presented to housestaff and nocturnists regarding expectations for supervision (for example, staffing ICU admissions within 2 hours). These guidelines likely contributed to the increased attending contact observed during critical clinical decision‐making, as well as the perceived improved patient outcomes by our housestaff. Second, the nocturnists were expected to be an integral part of the overnight care team. In many systems, the nocturnists act completely independently of the housestaff teams, creating an additional barrier to contact and communication. In our system, because of clear guidelines and their integral role in staffing overnight admissions, the nocturnists were an essential partner in care for the housestaff. Third, most of the nocturnists had recently completed their residency training at this institution. Although our survey does not directly address this, we believe their knowledge of the hospital, appreciation of the role of the intern and the resident within our system, and understanding of the need to preserve housestaff autonomy were essential to building a successful nocturnist role. Lastly, the nocturnists were not only expected to supervise and staff new admissions, but were also given a teaching expectation. We believe they were viewed by housestaff as qualified teaching attendings, similar to the daytime hospitalist. These findings may provide guidelines for other institutions seeking to balance overnight hospitalist supervision with preserving residents' ability to make autonomous decisions.

There are several limitations to our study. The findings represent the experience of internal medicine housestaff at a single academic, tertiary care medical center and may not be reflective of other institutions or specialties. We asked housestaff to recall night float experiences from the prior year, which may have introduced recall bias, though responses were obtained before participants underwent the new curriculum. Maturation of housestaff over time could have led to changes in perceived autonomy, value of the night float rotation, and rates of attending contact independent of nocturnist implementation. In addition, there may have been unaccounted changes to other elements of the residency program, hospital, or patient volume between rotations. The implementation of the nocturnist, however, was the only major change to our training program that academic year, and there were no significant changes in patient volume, structure of the teaching or non‐resident services, or other policies around resident supervision.

It is possible that the nocturnist may have contributed to reports of increased clinical value and perceived quality of patient care simply by decreasing overnight workload for housestaff, and enhanced supervision and teaching may have played a lesser role. Even if this were true, optimizing resident workload is in itself an important goal for teaching hospitals and residency programs alike in order to maximize patient safety. Inclusion of intern post‐rotation surveys may have influenced the data, though we had no reason to suspect the surveyed interns would respond in a different manner than prior resident groups. The responses of both junior and senior housestaff were pooled; while this potentially weighted the results in favor of higher responding groups, we felt that it conveyed the residents' accurate sentiments on the program. Finally, while we compared two models of overnight supervision, we reported only housestaff perceptions of education, autonomy, patient outcomes, and supervisory contact, and not direct measures of knowledge or patient care. Further research will be required to define the relationship between supervision practices and patient‐level clinical outcomes.

The new ACGME regulations around resident supervision, as well as the broader movement to improve the safety and quality of care, require residency programs to negotiate a delicate balance between providing high‐quality patient care while preserving graduated independence in clinical training. Our study demonstrates that increased overnight supervision by nocturnists with well‐defined supervisory and teaching roles can preserve housestaff autonomy, improve the clinical experience for trainees, increase access to support during times of critical decision‐making, and potentially lead to improved patient outcomes.

Acknowledgements

Disclosures: No authors received commercial support for the submitted work. Dr Arora reports being an editorial board member for Agency for Healthcare Research and Quality (AHRQ) Web M&M, receiving grants from the ACGME for previous work, and receiving payment for speaking on graduate medical education supervision.

References
  1. Kennedy TJ, Regehr G, Baker GR, Lingard LA. Progressive independence in clinical training: a tradition worth defending? Acad Med. 2005;80(10 suppl):S106-S111.
  2. Joint Committee of the Group on Resident Affairs and Organization of Resident Representatives. Patient Safety and Graduate Medical Education. Washington, DC: Association of American Medical Colleges; February 2003:6.
  3. Accreditation Council for Graduate Medical Education. Common Program Requirements. Available at: http://www.acgme.org/acWebsite/home/Common_Program_Requirements_07012011.pdf. Accessed October 16, 2011.
  4. The IOM medical errors report: 5 years later, the journey continues. Qual Lett Health Lead. 2005;17(1):2-10.
  5. Bush RW. Supervision in medical education: logical fallacies and clear choices. J Grad Med Educ. 2010;2(1):141-143.
  6. Kennedy TJ, Regehr G, Baker GR, Lingard L. Preserving professional credibility: grounded theory study of medical trainees' requests for clinical support. BMJ. 2009;338:b128.
  7. Phy MP, Offord KP, Manning DM, Bundrick JB, Huddleston JM. Increased faculty presence on inpatient teaching services. Mayo Clin Proc. 2004;79(3):332-336.
  8. Trowbridge RL, Almeder L, Jacquet M, Fairfield KM. The effect of overnight in‐house attending coverage on perceptions of care and education on a general medical service. J Grad Med Educ. 2010;2(1):53-56.
  9. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428-442.
  10. Jasti H, Hanusa BH, Switzer GE, Granieri R, Elnicki M. Residents' perceptions of a night float system. BMC Med Educ. 2009;9:52.
  11. Luks AM, Smith CS, Robins L, Wipf JE. Resident perceptions of the educational value of night float rotations. Teach Learn Med. 2010;22(3):196-201.
  12. Wallach SL, Alam K, Diaz N, Shine D. How do internal medicine residency programs evaluate their resident float experiences? South Med J. 2006;99(9):919-923.
  13. Beasley BW, McBride J, McDonald FS. Hospitalist involvement in internal medicine residencies. J Hosp Med. 2009;4(8):471-475.
  14. Ogden PE,Sibbitt S,Howell M, et al.Complying with ACGME resident duty hour restrictions: restructuring the 80 hour workweek to enhance education and patient safety at Texas A81(12):10261031.
  15. Farnan JM,Johnson JK,Meltzer DO,Humphrey HJ,Arora VM.On‐call supervision and resident autonomy: from micromanager to absentee attending.Am J Med.2009;122(8):784788.
Journal of Hospital Medicine. 7(8):606-610.

Postgraduate medical education has traditionally relied on a training model of progressive independence, where housestaff learn patient care through increasing autonomy and decreasing levels of supervision.1 While this framework has little empirical backing, it is grounded in sound educational theory from similar disciplines and endorsed by medical associations.1, 2 The Accreditation Council for Graduate Medical Education (ACGME) recently implemented regulations requiring that first‐year residents have a qualified supervisor physically present or immediately available at all times.3 Previously, oversight by an offsite supervisor (for example, an attending physician at home) was considered adequate. These new regulations, although motivated by patient safety imperatives,4 have elicited concerns that increased supervision may lead to decreased housestaff autonomy and an increased reliance on supervisors for clinical guidance.5 Such changes could ultimately produce less qualified practitioners by the completion of training.

Critics of the current training model point to a patient safety mechanism where housestaff must take responsibility for requesting attending‐level help when situations arise that surpass their skill level.5 For resident physicians, however, the decision to request support is often complex and dependent not only on the clinical question, but also on unique and variable trainee and supervisor factors.6 Survey data from 1999, prior to the current training regulations, showed that increased faculty presence improved resident reports of educational value, quality of patient care, and autonomy.7 A recent survey, performed after the initiation of overnight attending supervision at an academic medical center, demonstrated perceived improvements in educational value and patient‐level outcomes by both faculty and housestaff.8 Whether increased supervision and resident autonomy can coexist remains undetermined.

Overnight rotations for residents (commonly referred to as night float) are often periods of little direct or indirect supervision. A recent systematic review of clinical supervision practices for housestaff in all fields found scarce literature on overnight supervision practices.9 Data on the quality of patient care provided by the resident night float remain limited and conflicting,10 and night rotations are perceived as having lower educational value than non-night float rotations.11 Yet in 2006, more than three-quarters of all internal medicine programs employed night float rotations.12 In response to ACGME guidelines mandating decreased shift lengths with continued restrictions on overall duty hours, it appears likely that even more training programs will implement night float systems.

The presence of overnight hospitalists (also known as nocturnists) is growing within the academic setting, yet their role in relation to trainees is either poorly defined13 or independent of housestaff.14 To better understand the impact of increasing levels of supervision on residency training, we investigated housestaff perceptions of education, autonomy, and clinical decision‐making before and after implementation of an in‐hospital, overnight attending physician (nocturnist).

METHODS

The study was conducted at a 570‐bed academic, tertiary care medical center affiliated with an internal medicine residency program of 170 housestaff. At our institution, all first year residents perform a week of intern night float consisting of overnight cross‐coverage of general medicine patients on the floor, step‐down, and intensive care units (ICUs). Second and third year residents each complete 4 to 6 days of resident night float each year at this hospital. They are responsible for assisting the intern night float with cross‐coverage, in addition to admitting general medicine patients to the floor, step‐down unit, and intensive care units. Every night at our medical center, 1 intern night float and 1 resident night float are on duty in the hospital; this is in addition to a resident from the on‐call medicine team and a resident working in the ICU. Prior to July 2010, no internal medicine attending physicians were physically present in the hospital at night. Oversight for the intern and resident night float was provided by the attending physician for the on‐call resident ward team, who was at home and available by pager. The night float housestaff were instructed to contact the responsible attending physician only when a major change in clinical status occurred for hospitalized or newly admitted patients, though this expectation was neither standardized nor monitored.

We established a nocturnist program at the start of the 2010 academic year. The position was staffed by hospitalists from within the Division of Hospital Medicine without the use of moonlighters. Two‐thirds of shifts were filled by 3 dedicated nocturnists with remaining staffing provided by junior hospitalist faculty. The dedicated nocturnists had recently completed their internal medicine residency at our institution. Shift length was 12 hours and dedicated nocturnists worked, on average, 10 shifts per month. The nocturnist filled a critical overnight safety role through mandatory bedside staffing of newly admitted ICU patients within 2 hours of admission, discussion in person or via telephone of newly admitted step‐down unit patients within 6 hours of admission, and direct or indirect supervision of the care of any patients undergoing a major change in clinical status. The overnight hospitalist was also available for clinical questions and to assist housestaff with triaging of overnight admissions. After nocturnist implementation, overnight housestaff received direct supervision or had immediate access to direct supervision, while prior to the nocturnist, residents had access only to indirect supervision.

In addition, the nocturnist admitted medicine patients after 1 AM in a 1:1 ratio with the admitting night float resident, performed medical consults, and provided coverage of non‐teaching medicine services. While actual volume numbers were not obtained, the estimated average of resident admissions per night was 2 to 3, and the number of nocturnist admissions was 1 to 2. The nocturnist also met nightly with night float housestaff for half‐hour didactics focusing on the management of common overnight clinical scenarios. The role of the new nocturnist was described to all housestaff in orientation materials given prior to their night float rotation and their general medicine ward rotation.

We administered rolling pre- and post-surveys to internal medicine intern and resident physicians who underwent the night float rotation at our hospital during the 2010 to 2011 academic year. Surveys examined housestaff perceptions of the night float rotation with regard to supervisory roles, educational and clinical value, and clinical decision-making prior to and after implementation of the nocturnist. Surveys were designed by the study investigators based on prior literature,1,5-10 personal experience, and housestaff suggestions, and were refined during works-in-progress meetings. Surveys were composed of Likert-style questions asking housestaff to rate their level of agreement (1-5, strongly disagree to strongly agree) with statements regarding the supervisory and educational experience of the night float rotation, and to judge their frequency of contact (1-5, never to always/nightly) with an attending physician for specific clinical scenarios. The clinical scenarios described situations dealing with attending-resident communication around transfers of care, diagnostic evaluation, therapeutic interventions, and adverse events. Scenarios were taken from previous literature describing supervision preferences of faculty and residents during times of critical clinical decision-making.15

One week prior to beginning their night float rotation for the 2010-2011 academic year, housestaff were sent an e-mail request to complete an online survey asking about their night float rotation during the prior academic year, when no nocturnist was present. One week after completion of their night float rotation for the 2010-2011 academic year, housestaff received an e-mail with a link to a post-survey asking about their recently completed, nocturnist-supervised night float rotation. First year residents received only a post-survey at the completion of their night float rotation, as they would be unable to reflect on prior experience.

Informed consent was embedded within the e-mail survey request. Survey requests were sent by a fellow within the Division of Hospital Medicine with a brief message cosigned by an associate program director of the residency program. We did not collect unique identifiers from respondents in order to offer additional assurance to participants that the survey was anonymous. No incentive was offered for completion of the survey. Survey data were anonymous and were downloaded to a database by a third party. Data were analyzed using Microsoft Excel, and pre-responses and post-responses were compared using the Student t test. The study was approved by the medical center's Institutional Review Board.
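The pre/post comparisons reported in the tables can be checked from the published summary statistics alone. As an illustration only (the authors' actual analysis was performed in Microsoft Excel, and their exact t-test variant is not specified), a minimal sketch of a pooled-variance Student t statistic computed from each group's mean, standard deviation, and sample size:

```python
import math

def student_t_from_summary(mean1, sd1, n1, mean2, sd2, n2):
    """Pooled-variance (Student) two-sample t statistic from summary statistics.

    Returns the t statistic and its degrees of freedom (n1 + n2 - 2).
    """
    # Pooled variance weights each group's variance by its degrees of freedom
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t = (mean2 - mean1) / se
    return t, n1 + n2 - 2

# "I have adequate overnight supervision" (Table 1):
# pre-nocturnist 3.65 (SD 0.76, n = 43) vs post-nocturnist 4.30 (SD 0.72, n = 53)
t, df = student_t_from_summary(3.65, 0.76, 43, 4.30, 0.72, 53)
print(round(t, 2), df)
```

A t statistic of roughly 4.3 on 94 degrees of freedom is consistent with the reported P < 0.0001 for this item.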

RESULTS

Rates of response for pre‐surveys and post‐surveys were 57% (43 respondents) and 51% (53 respondents), respectively. Due to response rates and in order to convey accurately the perceptions of the training program as a whole, we collapsed responses of the pre‐surveys and post‐surveys based on level of training. After implementation of the overnight attending, we observed a significant increase in the perceived clinical value of the night float rotation (3.95 vs 4.27, P = 0.01) as well as in the adequacy of overnight supervision (3.65 vs 4.30, P < 0.0001; Table 1). There was no reported change in housestaff decision‐making autonomy (4.35 vs 4.45, P = 0.44). In addition, we noted a nonsignificant trend towards an increased perception of the night float rotation as a valuable educational experience (3.83 vs 4.04, P = 0.24). After implementation of the nocturnist, more resident physicians agreed that overnight supervision by an attending positively impacted patient outcomes (3.79 vs 4.30, P = 0.002).

Table 1. General Perceptions of the Night Float Rotation

Statement | Pre-Nocturnist (n = 43), Mean (SD) | Post-Nocturnist (n = 53), Mean (SD) | P Value
Night float is a valuable educational rotation | 3.83 (0.81) | 4.04 (0.83) | 0.24
Night float is a valuable clinical rotation | 3.95 (0.65) | 4.27 (0.59) | 0.01
I have adequate overnight supervision | 3.65 (0.76) | 4.30 (0.72) | <0.0001
I have sufficient autonomy to make clinical decisions | 4.35 (0.57) | 4.45 (0.60) | 0.44
Overnight supervision by an attending positively impacts patient outcomes | 3.79 (0.88) | 4.30 (0.74) | 0.002

NOTE: Responses range from strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non-response. Abbreviations: SD, standard deviation.

After implementation of the nocturnist, night float providers demonstrated increased rates of contacting an attending physician overnight (Table 2). There were significantly greater rates of attending contact for transfers from outside facilities (2.00 vs 3.20, P = 0.006) and during times of adverse events (2.51 vs 3.25, P = 0.04). We observed a reported increase in attending contact prior to ordering invasive diagnostic procedures (1.75 vs 2.76, P = 0.004) and noninvasive diagnostic procedures (1.09 vs 1.31, P = 0.03), as well as prior to initiation of intravenous antibiotics (1.11 vs 1.47, P = 0.007) and vasopressors (1.52 vs 2.40, P = 0.004).

Table 2. Self-Reported Incidence of Overnight Attending Contact During Critical Decision-Making

Scenario | Pre-Nocturnist (n = 42), Mean (SD) | Post-Nocturnist (n = 51), Mean (SD) | P Value
Receive transfer from outside facility | 2.00 (1.27) | 3.20 (1.58) | 0.006
Prior to ordering noninvasive diagnostic procedure | 1.09 (0.29) | 1.31 (0.58) | 0.03
Prior to ordering an invasive procedure | 1.75 (0.84) | 2.76 (1.45) | 0.004
Prior to initiation of intravenous antibiotics | 1.11 (0.32) | 1.47 (0.76) | 0.007
Prior to initiation of vasopressors | 1.52 (0.82) | 2.40 (1.49) | 0.004
Patient experiencing adverse event, regardless of cause | 2.51 (1.31) | 3.25 (1.34) | 0.04

NOTE: Responses range from never contact (1) to always contact (5). Response rate (n) fluctuates due to item non-response. Abbreviations: SD, standard deviation.

After initiating the program, the nocturnist became the most commonly contacted overnight provider by the night float housestaff (Table 3). We observed a decrease in peer to peer contact between the night float housestaff and the on‐call overnight resident after implementation of the nocturnist (2.67 vs 2.04, P = 0.006).

Table 3. Self-Reported Incidence of Night Float Contact With Overnight Providers for Patient Care

Provider | Pre-Nocturnist (n = 43), Mean (SD) | Post-Nocturnist (n = 53), Mean (SD) | P Value
ICU fellow | 1.86 (0.70) | 1.86 (0.83) | 0.96
On-call resident | 2.67 (0.89) | 2.04 (0.92) | 0.006
ICU resident | 2.14 (0.74) | 2.04 (0.91) | 0.56
On-call medicine attending | 1.41 (0.79) | 1.26 (0.52) | 0.26
Patient's PMD | 1.27 (0.31) | 1.15 (0.41) | 0.31
Referring MD | 1.32 (0.60) | 1.15 (0.45) | 0.11
Nocturnist | n/a | 3.59 (1.22) | n/a

NOTE: Responses range from never (1) to nightly (5). Response rate (n) fluctuates due to item non-response. Abbreviations: ICU, intensive care unit; PMD, primary medical doctor; SD, standard deviation.

Attending presence led to increased agreement that there was a defined overnight attending to contact (2.97 vs 1.96, P < 0.0001) and a decreased fear of waking an attending overnight for assistance (3.26 vs 2.72, P = 0.03). Increased attending availability, however, did not change resident physician's fear of revealing knowledge gaps, their desire to make decisions independently, or their belief that contacting an attending would not change a patient's outcome (Table 4).

Table 4. Reasons Night Float Housestaff Do Not Contact an Attending Physician

Statement | Pre-Nocturnist (n = 42), Mean (SD) | Post-Nocturnist (n = 52), Mean (SD) | P Value
No defined attending to contact | 2.97 (1.35) | 1.96 (0.92) | <0.0001
Fear of waking an attending | 3.26 (1.25) | 2.72 (1.09) | 0.03
Fear of revealing knowledge gaps | 2.26 (1.14) | 2.25 (0.96) | 0.95
Would rather make decision on own | 3.40 (0.93) | 3.03 (1.06) | 0.08
Will not change patient outcome | 3.26 (1.06) | 3.21 (1.03) | 0.81

NOTE: Responses range from strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non-response. Abbreviations: SD, standard deviation.

DISCUSSION

The ACGME's new duty hour regulations require that supervision for first-year residents be provided by a qualified physician (advanced resident, fellow, or attending physician) who is physically present at the hospital. Our study demonstrates that increased direct overnight supervision provided by an in-house nocturnist enhanced the clinical value of the night float rotation and the perceived quality of patient care. Increased attending supervision did not reduce perceived decision-making autonomy, and in fact led to increased rates of attending contact during times of critical clinical decision-making. Such results may help assuage fears that recent regulations mandating enhanced attending supervision will produce less capable practitioners, and offer reassurance that these changes are positively impacting patient care.

Many academic institutions are implementing nocturnists, although their precise roles and responsibilities are still being defined. Our nocturnist program was explicitly designed with housestaff supervision as a core responsibility, with the goal of improving patient safety and housestaff education overnight. We found that availability barriers to attending contact were logically decreased with in‐house faculty presence. Potentially harmful attitudes, however, around requesting support (such as fear of revealing knowledge gaps or the desire to make decisions independently) remained. Furthermore, despite statistically significant increases in contact between faculty and residents at times of critical decision‐making, overall rates of attending contact for diagnostic and therapeutic interventions remained low. It is unknown from our study or previous research, however, what level of contact is appropriate or ideal for many clinical scenarios.

Additionally, we described a novel role for an academic nocturnist at a tertiary care teaching hospital and offered a potential template for the development of academic nocturnists at similar institutions seeking to increase direct overnight supervision. Such roles have not been previously well defined in the literature. Based on our experience, the nocturnist's role was manageable and well utilized by housestaff, particularly for assistance with critically ill patients and overnight triaging. We believe there are a number of factors associated with the success of this role. First, clear guidelines were presented to housestaff and nocturnists regarding expectations for supervision (for example, staffing ICU admissions within 2 hours). These guidelines likely contributed to the increased attending contact observed during critical clinical decision-making, as well as the improved patient outcomes perceived by our housestaff. Second, the nocturnists were expected to be an integral part of the overnight care team. In many systems, the nocturnists act completely independently of the housestaff teams, creating an additional barrier to contact and communication. In our system, because of clear guidelines and their integral role in staffing overnight admissions, the nocturnists were an essential partner in care for the housestaff. Third, most of the nocturnists had recently completed their residency training at this institution. Although our survey does not directly address this, we believe their knowledge of the hospital, appreciation of the role of the intern and the resident within our system, and understanding of the need to preserve housestaff autonomy were essential to building a successful nocturnist role. Lastly, the nocturnists were not only expected to supervise and staff new admissions, but were also given a teaching expectation. We believe they were viewed by housestaff as qualified teaching attendings, similar to the daytime hospitalists. These findings may provide guidance for other institutions seeking to balance overnight hospitalist supervision with preserving residents' ability to make autonomous decisions.

There are several limitations to our study. The findings represent the experience of internal medicine housestaff at a single academic, tertiary care medical center and may not be reflective of other institutions or specialties. We asked housestaff to recall night float experiences from the prior year, which may have introduced recall bias, though responses were obtained before participants underwent the new curriculum. Maturation of housestaff over time could have led to changes in perceived autonomy, value of the night float rotation, and rates of attending contact independent of nocturnist implementation. In addition, there may have been unaccounted changes to other elements of the residency program, hospital, or patient volume between rotations. The implementation of the nocturnist, however, was the only major change to our training program that academic year, and there were no significant changes in patient volume, structure of the teaching or non‐resident services, or other policies around resident supervision.

It is possible that the nocturnist may have contributed to reports of increased clinical value and perceived quality of patient care simply by decreasing the overnight workload for housestaff, with enhanced supervision and teaching playing a lesser role. Even if this were true, optimizing resident workload is in itself an important goal for teaching hospitals and residency programs alike in order to maximize patient safety. Inclusion of intern post-rotation surveys may have influenced the data, though we had no reason to suspect that the surveyed interns would respond differently than prior resident groups. The responses of both junior and senior housestaff were pooled; while this potentially weighted the results in favor of higher-responding groups, we felt that it accurately conveyed the residents' sentiments about the program. Finally, while we compared two models of overnight supervision, we reported only housestaff perceptions of education, autonomy, patient outcomes, and supervisory contact, and not direct measures of knowledge or patient care. Further research will be required to define the relationship between supervision practices and patient-level clinical outcomes.

The new ACGME regulations around resident supervision, as well as the broader movement to improve the safety and quality of care, require residency programs to negotiate a delicate balance between providing high‐quality patient care while preserving graduated independence in clinical training. Our study demonstrates that increased overnight supervision by nocturnists with well‐defined supervisory and teaching roles can preserve housestaff autonomy, improve the clinical experience for trainees, increase access to support during times of critical decision‐making, and potentially lead to improved patient outcomes.

Acknowledgements

Disclosures: No authors received commercial support for the submitted work. Dr Arora reports being an editorial board member for Agency for Healthcare Research and Quality (AHRQ) Web M&M, receiving grants from the ACGME for previous work, and receiving payment for speaking on graduate medical education supervision.

Postgraduate medical education has traditionally relied on a training model of progressive independence, where housestaff learn patient care through increasing autonomy and decreasing levels of supervision.1 While this framework has little empirical backing, it is grounded in sound educational theory from similar disciplines and endorsed by medical associations.1, 2 The Accreditation Council for Graduate Medical Education (ACGME) recently implemented regulations requiring that first‐year residents have a qualified supervisor physically present or immediately available at all times.3 Previously, oversight by an offsite supervisor (for example, an attending physician at home) was considered adequate. These new regulations, although motivated by patient safety imperatives,4 have elicited concerns that increased supervision may lead to decreased housestaff autonomy and an increased reliance on supervisors for clinical guidance.5 Such changes could ultimately produce less qualified practitioners by the completion of training.

Critics of the current training model point to a patient safety mechanism where housestaff must take responsibility for requesting attending‐level help when situations arise that surpass their skill level.5 For resident physicians, however, the decision to request support is often complex and dependent not only on the clinical question, but also on unique and variable trainee and supervisor factors.6 Survey data from 1999, prior to the current training regulations, showed that increased faculty presence improved resident reports of educational value, quality of patient care, and autonomy.7 A recent survey, performed after the initiation of overnight attending supervision at an academic medical center, demonstrated perceived improvements in educational value and patient‐level outcomes by both faculty and housestaff.8 Whether increased supervision and resident autonomy can coexist remains undetermined.

Overnight rotations for residents (commonly referred to as night float) are often times of little direct or indirect supervision. A recent systematic review of clinical supervision practices for housestaff in all fields found scarce literature on overnight supervision practices.9 There remains limited and conflicting data regarding the quality of patient care provided by the resident night float,10 as well as evidence revealing a low perceived educational value of night rotations when compared with non‐night float rotations.11 Yet in 2006, more than three‐quarters of all internal medicine programs employed night float rotations.12 In response to ACGME guidelines mandating decreased shift lengths with continued restrictions on overall duty hours, it appears likely even more training programs will implement night float systems.

The presence of overnight hospitalists (also known as nocturnists) is growing within the academic setting, yet their role in relation to trainees is either poorly defined13 or independent of housestaff.14 To better understand the impact of increasing levels of supervision on residency training, we investigated housestaff perceptions of education, autonomy, and clinical decision‐making before and after implementation of an in‐hospital, overnight attending physician (nocturnist).

METHODS

The study was conducted at a 570‐bed academic, tertiary care medical center affiliated with an internal medicine residency program of 170 housestaff. At our institution, all first year residents perform a week of intern night float consisting of overnight cross‐coverage of general medicine patients on the floor, step‐down, and intensive care units (ICUs). Second and third year residents each complete 4 to 6 days of resident night float each year at this hospital. They are responsible for assisting the intern night float with cross‐coverage, in addition to admitting general medicine patients to the floor, step‐down unit, and intensive care units. Every night at our medical center, 1 intern night float and 1 resident night float are on duty in the hospital; this is in addition to a resident from the on‐call medicine team and a resident working in the ICU. Prior to July 2010, no internal medicine attending physicians were physically present in the hospital at night. Oversight for the intern and resident night float was provided by the attending physician for the on‐call resident ward team, who was at home and available by pager. The night float housestaff were instructed to contact the responsible attending physician only when a major change in clinical status occurred for hospitalized or newly admitted patients, though this expectation was neither standardized nor monitored.

We established a nocturnist program at the start of the 2010 academic year. The position was staffed by hospitalists from within the Division of Hospital Medicine without the use of moonlighters. Two‐thirds of shifts were filled by 3 dedicated nocturnists with remaining staffing provided by junior hospitalist faculty. The dedicated nocturnists had recently completed their internal medicine residency at our institution. Shift length was 12 hours and dedicated nocturnists worked, on average, 10 shifts per month. The nocturnist filled a critical overnight safety role through mandatory bedside staffing of newly admitted ICU patients within 2 hours of admission, discussion in person or via telephone of newly admitted step‐down unit patients within 6 hours of admission, and direct or indirect supervision of the care of any patients undergoing a major change in clinical status. The overnight hospitalist was also available for clinical questions and to assist housestaff with triaging of overnight admissions. After nocturnist implementation, overnight housestaff received direct supervision or had immediate access to direct supervision, while prior to the nocturnist, residents had access only to indirect supervision.

In addition, the nocturnist admitted medicine patients after 1 AM in a 1:1 ratio with the admitting night float resident, performed medical consults, and provided coverage of non‐teaching medicine services. While actual volume numbers were not obtained, the estimated average of resident admissions per night was 2 to 3, and the number of nocturnist admissions was 1 to 2. The nocturnist also met nightly with night float housestaff for half‐hour didactics focusing on the management of common overnight clinical scenarios. The role of the new nocturnist was described to all housestaff in orientation materials given prior to their night float rotation and their general medicine ward rotation.

We administered pre‐rolling surveys and post‐rolling surveys of internal medicine intern and resident physicians who underwent the night float rotation at our hospital during the 2010 to 2011 academic year. Surveys examined housestaff perceptions of the night float rotation with regard to supervisory roles, educational and clinical value, and clinical decision‐making prior to and after implementation of the nocturnist. Surveys were designed by the study investigators based on prior literature,1, 510 personal experience, and housestaff suggestion, and were refined during works‐in‐progress meetings. Surveys were composed of Likert‐style questions asking housestaff to rate their level of agreement (15, strongly disagree to strongly agree) with statements regarding the supervisory and educational experience of the night float rotation, and to judge their frequency of contact (15, never to always/nightly) with an attending physician for specific clinical scenarios. The clinical scenarios described situations dealing with attendingresident communication around transfers of care, diagnostic evaluation, therapeutic interventions, and adverse events. Scenarios were taken from previous literature describing supervision preferences of faculty and residents during times of critical clinical decision‐making.15

One week prior to the beginning their night float rotation for the 20102011 academic year, housestaff were sent an e‐mail request to complete an online survey asking about their night float rotation during the prior academic year, when no nocturnist was present. One week after completion of their night float rotation for the 20102011 academic year, housestaff received an e‐mail with a link to a post‐survey asking about their recently completed, nocturnist‐supervised, night float rotation. First year residents received only a post‐survey at the completion of their night float rotation, as they would be unable to reflect on prior experience.

Informed consent was imbedded within the e‐mail survey request. Survey requests were sent by a fellow within the Division of Hospital Medicine with a brief message cosigned by an associate program director of the residency program. We did not collect unique identifiers from respondents in order to offer additional assurances to the participants that the survey was anonymous. There was no incentive offered for completion of the survey. Survey data were anonymous and downloaded to a database by a third party. Data were analyzed using Microsoft Excel, and pre‐responses and post‐responses compared using a Student t test. The study was approved by the medical center's Institutional Review Board.

RESULTS

Rates of response for pre‐surveys and post‐surveys were 57% (43 respondents) and 51% (53 respondents), respectively. Due to response rates and in order to convey accurately the perceptions of the training program as a whole, we collapsed responses of the pre‐surveys and post‐surveys based on level of training. After implementation of the overnight attending, we observed a significant increase in the perceived clinical value of the night float rotation (3.95 vs 4.27, P = 0.01) as well as in the adequacy of overnight supervision (3.65 vs 4.30, P < 0.0001; Table 1). There was no reported change in housestaff decision‐making autonomy (4.35 vs 4.45, P = 0.44). In addition, we noted a nonsignificant trend towards an increased perception of the night float rotation as a valuable educational experience (3.83 vs 4.04, P = 0.24). After implementation of the nocturnist, more resident physicians agreed that overnight supervision by an attending positively impacted patient outcomes (3.79 vs 4.30, P = 0.002).

Table 1. General Perceptions of the Night Float Rotation

Statement | Pre‐Nocturnist (n = 43), Mean (SD) | Post‐Nocturnist (n = 53), Mean (SD) | P Value
Night float is a valuable educational rotation | 3.83 (0.81) | 4.04 (0.83) | 0.24
Night float is a valuable clinical rotation | 3.95 (0.65) | 4.27 (0.59) | 0.01
I have adequate overnight supervision | 3.65 (0.76) | 4.30 (0.72) | <0.0001
I have sufficient autonomy to make clinical decisions | 4.35 (0.57) | 4.45 (0.60) | 0.44
Overnight supervision by an attending positively impacts patient outcomes | 3.79 (0.88) | 4.30 (0.74) | 0.002

NOTE: Responses range from strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non‐response. Abbreviation: SD, standard deviation.

After implementation of the nocturnist, night float providers demonstrated increased rates of contacting an attending physician overnight (Table 2). There were significantly greater rates of attending contact for transfers from outside facilities (2.00 vs 3.20, P = 0.006) and during times of adverse events (2.51 vs 3.25, P = 0.04). We observed a reported increase in attending contact prior to ordering invasive diagnostic procedures (1.75 vs 2.76, P = 0.004) and noninvasive diagnostic procedures (1.09 vs 1.31, P = 0.03), as well as prior to initiation of intravenous antibiotics (1.11 vs 1.47, P = 0.007) and vasopressors (1.52 vs 2.40, P = 0.004).

Table 2. Self‐Reported Incidence of Overnight Attending Contact During Critical Decision‐Making

Scenario | Pre‐Nocturnist (n = 42), Mean (SD) | Post‐Nocturnist (n = 51), Mean (SD) | P Value
Receive transfer from outside facility | 2.00 (1.27) | 3.20 (1.58) | 0.006
Prior to ordering noninvasive diagnostic procedure | 1.09 (0.29) | 1.31 (0.58) | 0.03
Prior to ordering an invasive procedure | 1.75 (0.84) | 2.76 (1.45) | 0.004
Prior to initiation of intravenous antibiotics | 1.11 (0.32) | 1.47 (0.76) | 0.007
Prior to initiation of vasopressors | 1.52 (0.82) | 2.40 (1.49) | 0.004
Patient experiencing adverse event, regardless of cause | 2.51 (1.31) | 3.25 (1.34) | 0.04

NOTE: Responses range from never contact (1) to always contact (5). Response rate (n) fluctuates due to item non‐response. Abbreviation: SD, standard deviation.

After initiating the program, the nocturnist became the most commonly contacted overnight provider by the night float housestaff (Table 3). We observed a decrease in peer to peer contact between the night float housestaff and the on‐call overnight resident after implementation of the nocturnist (2.67 vs 2.04, P = 0.006).

Table 3. Self‐Reported Incidence of Night Float Contact With Overnight Providers for Patient Care

Provider | Pre‐Nocturnist (n = 43), Mean (SD) | Post‐Nocturnist (n = 53), Mean (SD) | P Value
ICU fellow | 1.86 (0.70) | 1.86 (0.83) | 0.96
On‐call resident | 2.67 (0.89) | 2.04 (0.92) | 0.006
ICU resident | 2.14 (0.74) | 2.04 (0.91) | 0.56
On‐call medicine attending | 1.41 (0.79) | 1.26 (0.52) | 0.26
Patient's PMD | 1.27 (0.31) | 1.15 (0.41) | 0.31
Referring MD | 1.32 (0.60) | 1.15 (0.45) | 0.11
Nocturnist | — | 3.59 (1.22) | —

NOTE: Responses range from never (1) to nightly (5). Response rate (n) fluctuates due to item non‐response. The nocturnist role did not exist during the pre‐nocturnist period. Abbreviations: ICU, intensive care unit; PMD, primary medical doctor; SD, standard deviation.

Attending presence led to increased agreement that there was a defined overnight attending to contact (2.97 vs 1.96, P < 0.0001) and a decreased fear of waking an attending overnight for assistance (3.26 vs 2.72, P = 0.03). Increased attending availability, however, did not change resident physicians' fear of revealing knowledge gaps, their desire to make decisions independently, or their belief that contacting an attending would not change a patient's outcome (Table 4).

Table 4. Reasons Night Float Housestaff Do Not Contact an Attending Physician

Statement | Pre‐Nocturnist (n = 42), Mean (SD) | Post‐Nocturnist (n = 52), Mean (SD) | P Value
No defined attending to contact | 2.97 (1.35) | 1.96 (0.92) | <0.0001
Fear of waking an attending | 3.26 (1.25) | 2.72 (1.09) | 0.03
Fear of revealing knowledge gaps | 2.26 (1.14) | 2.25 (0.96) | 0.95
Would rather make decision on own | 3.40 (0.93) | 3.03 (1.06) | 0.08
Will not change patient outcome | 3.26 (1.06) | 3.21 (1.03) | 0.81

NOTE: Responses range from strongly disagree (1) to strongly agree (5). Response rate (n) fluctuates due to item non‐response. Abbreviation: SD, standard deviation.

DISCUSSION

The ACGME's new duty hour regulations require that supervision for first‐year residents be provided by a qualified physician (advanced resident, fellow, or attending physician) who is physically present at the hospital. Our study demonstrates that increased direct overnight supervision provided by an in‐house nocturnist enhanced the clinical value of the night float rotation and the perceived quality of patient care. In our study, increased attending supervision did not reduce perceived decision‐making autonomy, and in fact led to increased rates of attending contact during times of critical clinical decision‐making. Such results may help assuage fears that recent regulations mandating enhanced attending supervision will produce less capable practitioners, and offer reassurance that such changes are positively impacting patient care.

Many academic institutions are implementing nocturnists, although their precise roles and responsibilities are still being defined. Our nocturnist program was explicitly designed with housestaff supervision as a core responsibility, with the goal of improving patient safety and housestaff education overnight. We found that, as expected, in‐house faculty presence reduced availability barriers to attending contact. Potentially harmful attitudes around requesting support (such as fear of revealing knowledge gaps or the desire to make decisions independently), however, remained. Furthermore, despite statistically significant increases in contact between faculty and residents at times of critical decision‐making, overall rates of attending contact for diagnostic and therapeutic interventions remained low. It is unknown from our study or previous research, however, what level of contact is appropriate or ideal for many clinical scenarios.

Additionally, we described a novel role of an academic nocturnist at a tertiary care teaching hospital and offered a potential template for the development of academic nocturnists at similar institutions seeking to increase direct overnight supervision. Such roles have not been previously well defined in the literature. Based on our experience, the nocturnist's role was manageable and well utilized by housestaff, particularly for assistance with critically ill patients and overnight triaging. We believe there are a number of factors associated with the success of this role. First, clear guidelines were presented to housestaff and nocturnists regarding expectations for supervision (for example, staffing ICU admissions within 2 hours). These guidelines likely contributed to the increased attending contact observed during critical clinical decision‐making, as well as the perceived improved patient outcomes by our housestaff. Second, the nocturnists were expected to be an integral part of the overnight care team. In many systems, the nocturnists act completely independently of the housestaff teams, creating an additional barrier to contact and communication. In our system, because of clear guidelines and their integral role in staffing overnight admissions, the nocturnists were an essential partner in care for the housestaff. Third, most of the nocturnists had recently completed their residency training at this institution. Although our survey does not directly address this, we believe their knowledge of the hospital, appreciation of the role of the intern and the resident within our system, and understanding of the need to preserve housestaff autonomy were essential to building a successful nocturnist role. Lastly, the nocturnists were not only expected to supervise and staff new admissions, but were also given a teaching expectation. We believe they were viewed by housestaff as qualified teaching attendings, similar to the daytime hospitalist. 
These findings may provide guidance for other institutions seeking to balance overnight hospitalist supervision with preserving residents' ability to make autonomous decisions.

There are several limitations to our study. The findings represent the experience of internal medicine housestaff at a single academic, tertiary care medical center and may not be reflective of other institutions or specialties. We asked housestaff to recall night float experiences from the prior year, which may have introduced recall bias, though responses were obtained before participants underwent the new curriculum. Maturation of housestaff over time could have led to changes in perceived autonomy, value of the night float rotation, and rates of attending contact independent of nocturnist implementation. In addition, there may have been unaccounted changes to other elements of the residency program, hospital, or patient volume between rotations. The implementation of the nocturnist, however, was the only major change to our training program that academic year, and there were no significant changes in patient volume, structure of the teaching or non‐resident services, or other policies around resident supervision.

It is possible that the nocturnist contributed to reports of increased clinical value and perceived quality of patient care simply by decreasing overnight workload for housestaff, with enhanced supervision and teaching playing a lesser role. Even if this were true, optimizing resident workload is in itself an important goal for teaching hospitals and residency programs alike in order to maximize patient safety. Inclusion of intern post‐rotation surveys may have influenced the data, though we had no reason to suspect that the surveyed interns would respond differently than prior resident groups. The responses of both junior and senior housestaff were pooled; while this potentially weighted the results in favor of higher‐responding groups, we felt that it conveyed the residents' sentiments about the program accurately. Finally, while we compared two models of overnight supervision, we reported only housestaff perceptions of education, autonomy, patient outcomes, and supervisory contact, and not direct measures of knowledge or patient care. Further research will be required to define the relationship between supervision practices and patient‐level clinical outcomes.

The new ACGME regulations around resident supervision, as well as the broader movement to improve the safety and quality of care, require residency programs to negotiate a delicate balance between providing high‐quality patient care and preserving graduated independence in clinical training. Our study demonstrates that increased overnight supervision by nocturnists with well‐defined supervisory and teaching roles can preserve housestaff autonomy, improve the clinical experience for trainees, increase access to support during times of critical decision‐making, and potentially lead to improved patient outcomes.

Acknowledgements

Disclosures: No authors received commercial support for the submitted work. Dr Arora reports being an editorial board member for Agency for Healthcare Research and Quality (AHRQ) Web M&M, receiving grants from the ACGME for previous work, and receiving payment for speaking on graduate medical education supervision.

References
  1. Kennedy TJ, Regehr G, Baker GR, Lingard LA. Progressive independence in clinical training: a tradition worth defending? Acad Med. 2005;80(10 suppl):S106-S111.
  2. Joint Committee of the Group on Resident Affairs and Organization of Resident Representatives. Patient Safety and Graduate Medical Education. Washington, DC: Association of American Medical Colleges; February 2003:6.
  3. Accreditation Council on Graduate Medical Education. Common Program Requirements. Available at: http://www.acgme.org/acWebsite/home/Common_Program_Requirements_07012011.pdf. Accessed October 16, 2011.
  4. The IOM medical errors report: 5 years later, the journey continues. Qual Lett Health Lead. 2005;17(1):2-10.
  5. Bush RW. Supervision in medical education: logical fallacies and clear choices. J Grad Med Educ. 2010;2(1):141-143.
  6. Kennedy TJ, Regehr G, Baker GR, Lingard L. Preserving professional credibility: grounded theory study of medical trainees' requests for clinical support. BMJ. 2009;338:b128.
  7. Phy MP, Offord KP, Manning DM, Bundrick JB, Huddleston JM. Increased faculty presence on inpatient teaching services. Mayo Clin Proc. 2004;79(3):332-336.
  8. Trowbridge RL, Almeder L, Jacquet M, Fairfield KM. The effect of overnight in‐house attending coverage on perceptions of care and education on a general medical service. J Grad Med Educ. 2010;2(1):53-56.
  9. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428-442.
  10. Jasti H, Hanusa BH, Switzer GE, Granieri R, Elnicki M. Residents' perceptions of a night float system. BMC Med Educ. 2009;9:52.
  11. Luks AM, Smith CS, Robins L, Wipf JE. Resident perceptions of the educational value of night float rotations. Teach Learn Med. 2010;22(3):196-201.
  12. Wallach SL, Alam K, Diaz N, Shine D. How do internal medicine residency programs evaluate their resident float experiences? South Med J. 2006;99(9):919-923.
  13. Beasley BW, McBride J, McDonald FS. Hospitalist involvement in internal medicine residencies. J Hosp Med. 2009;4(8):471-475.
  14. Ogden PE, Sibbitt S, Howell M, et al. Complying with ACGME resident duty hour restrictions: restructuring the 80 hour workweek to enhance education and patient safety at Texas A&M/Scott & White Memorial Hospital. Acad Med. 2006;81(12):1026-1031.
  15. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. On‐call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784-788.
Issue
Journal of Hospital Medicine - 7(8)
Page Number
606-610
Display Headline
Effects of increased overnight supervision on resident education, decision‐making, and autonomy
Article Source
Copyright © 2012 Society of Hospital Medicine
Correspondence Location
Division of Hospital Medicine, San Francisco General Hospital, Department of Medicine, University of California San Francisco, 1001 Potrero Ave, Room 5H‐4, San Francisco, CA 94110

Hospitalist Attendings: Systematic Review

Display Headline
Effect of hospitalist attending physicians on trainee educational experiences: A systematic review

Wachter and Goldman1 described the hospitalist model for inpatient care more than a decade ago. The Society of Hospital Medicine (SHM) defines hospitalists as physicians whose primary professional focus is the general medical care of hospitalized patients. Their activities include patient care, teaching, research, and leadership related to hospital medicine.2 This care delivery model has enjoyed exponential growth, with approximately 20,000 hospitalists in the United States and an estimated 30,000 by the end of the decade.3-5 Currently, 29% of hospitals, including 55% of those with at least 200 beds, employ hospitalists to coordinate inpatient care.6 Data suggest that hospitalists promote cost containment and decrease length of stay without negatively affecting rates of death, readmission, or patient satisfaction.7-15

In academic settings, hospitalists also provide a substantial amount of teaching to trainees,1618 and the hospitalist model represents a fundamental change in inpatient education delivery. Traditional ward attendings typically consisted of a heterogeneous group of subspecialists, laboratory‐based clinician scientists, and general internists, many of whom attended and taught relatively infrequently. By virtue of focusing purely on inpatient care, hospitalists are more intimately involved with inpatient care systems, as well as teaching challenges (and opportunities) in the inpatient setting. The theoretical educational benefits of hospitalists include greater availability, more expertise in hospital medicine, and more emphasis on cost‐effective care.7, 18, 19 Concerns that trainees would have diminished autonomy and less exposure to subspecialist care have not been borne out.16, 20, 21

The purpose of this study was to examine the role of hospitalists on inpatient trainee education. We systematically reviewed the literature to determine the impact of hospitalists compared to nonhospitalist attendings on medical students' and residents' education.

MATERIALS AND METHODS

Data Sources

We searched the MEDLINE, Database of Reviews of Effectiveness (DARE), National Health Service (NHS) Economic Evaluation Database (EED), Health Technology Assessment (HTA), and Cochrane Collaboration databases for citations using the term hospitalist through November 2007, and updated the literature search through October 1, 2008. Additionally, we manually searched the bibliographies of relevant retrieved articles and national meeting abstracts from the SHM (2002‐2007), Society of General Internal Medicine (SGIM) (2001‐2007), and Pediatric Academic Societies (PAS) (2000‐2007). The authors of included meeting abstracts were contacted for additional information.

Data Selection

We included English‐language studies that reported the effects of hospitalist attending physicians on the knowledge, skills, or attitudes of medical students or residents in an inpatient setting, and compared these outcomes to a comparison group of trainees taught by nonhospitalist attending physicians. We excluded opinion articles, review articles, descriptions of curricula, surveys of program leaders, and evaluations of teaching without trainee assessments.

Data Extraction

We developed a standardized data extraction form based on the Best Evidence Medical Education (BEME) Collaboration protocol.22 The following information was extracted from each article: study design and measurement scale; attending and trainee information; study setting; response rate, if available; outcomes measuring attending physician's teaching ability; and outcomes assessing trainees' attitudes, knowledge, and skills. Open‐ended items solicited overall impression, concerns, new insights, and avenues for research not already captured in the data extraction form. A meta‐analysis was not performed due to varying measures for teacher assessments.

One investigator (P.N.) performed the literature search, and a second investigator (K.E.H.) reviewed and confirmed the appropriateness of the articles retained and excluded based on review of the titles and abstracts. Next, 3 investigators (P.N., K.E.H., S.R.) confirmed that all the included articles met inclusion criteria. All 3 independently abstracted each article and coded the strength of findings and methodological quality based on: (1) response rate; (2) number of trainees and attendings; (3) control for additional education interventions; (4) explicit indication of random allocation of trainees to attendings; and (5) presence of a contemporaneous comparison group of nonhospitalist attendings. The level of behavioral impact by the 4‐level Kirkpatrick hierarchy was also recorded for each study to assess the strength of the intervention.23 The strength of data was rated for each study on a scale of 1 to 5, with 1 = no clear conclusions can be drawn; 2 = results ambiguous, but appears to be a trend; 3 = conclusions can probably be based on results; 4 = results are clear and very likely to be true; and 5 = results are unequivocal. Disagreements about search criteria, data extraction, and classification of study results were resolved by consensus.

RESULTS

Search Results

The database searches yielded 711 articles (Figure 1). Based on review of titles and abstracts, 32 articles were retrieved for full‐text review. During full‐text review, we eliminated 26 studies because they had no nonhospitalist control group,7, 16, 18, 24-27 were opinion or review articles,19, 21, 28-34 examined hospitalists' roles without trainee outcomes,17, 35-40 surveyed program administration,41 or did not involve hospitalists.42, 43 Ultimately, 6 citations published between 2002 and 2007 met all inclusion criteria (Table 1).44-49 The updated literature search through October 1, 2008 did not yield any additional relevant studies.

Figure 1
Search and selection of included articles.
Table 1. Summary of Studies

Location, Year (Reference) | Learners (n) | Number of Attendings | Attending Ward Responsibilities (weeks per year) | Attending Experience (mean years postgraduation) | Attending Gender (% female) | Survey Response Rate (%) | Data Strength
University of Chicago, 2002 (44) | PGY‐unspecified (86) | 2‐4 hospitalists; unknown nonhospitalists | 12‐24 hospitalists; 4‐8 nonhospitalists | — | — | 58 | 2
Children's Hospital, Boston, 2002 (45) | PGY‐1, PGY‐3 (unknown) | 8 hospitalists; 75 nonhospitalists | 12‐16 hospitalists; 2‐4 nonhospitalists | — | — | 63 | 2
Oregon Health & Sciences, 2004 (46) | MS3 (138) | 6 hospitalists; 11 nonhospitalists | 22.8 hospitalists; 6.4 nonhospitalists | 4.2 hospitalists; 10.9 nonhospitalists | 2/6 (33%) hospitalists; 4/11 (36%) nonhospitalists | 72 | 3
University of California, San Francisco, 2004 (47) | MS3‐4, PGY1‐3 (917) | 17 hospitalists; 39 general internists; 13 subspecialists | 12 hospitalists; 3.24 nonhospitalists | — | 6/17 (35%) hospitalists; 17/52 (33%) nonhospitalists | 91 | 4
Grady Memorial, 2004 (48) | MS3‐4, PGY1‐3 (unknown) | 12 hospitalists; 27 general internists; 51 subspecialists | 24 hospitalists; 6 nonhospitalists | 6.1 hospitalists; 9.7 general internists; 21.6 subspecialists | 6/12 (50%) hospitalists; 16/51 (31%) nonhospitalists | 81 | 3
Penn State Children's Hospital, 2007 (49) | MS3 (67) | 2 hospitalists; 8 nonhospitalists | 2 MDs covered 32 weeks (hospitalists); 8 MDs covered 28 weeks (nonhospitalists) | — | 1/2 (50%) hospitalists; 2/8 (25%) nonhospitalists | 100 | 3
Multiple sites, 2005 (50)*† | MS3 (294) | — | — | — | — | 54 | 2
California Pacific Medical Center, 2006 (51)* | PGY‐unspecified (unknown) | — | — | — | — | — | 1

NOTE: Dashes indicate data not reported. Data strength: 1 (no clear conclusions can be drawn), 2 (results ambiguous, but appears to be a trend), 3 (conclusions can probably be based on results), 4 (results are clear and very likely to be true), 5 (results are unequivocal).
* Meeting abstracts.
† Brigham & Women's Hospital, University of California San Francisco, University of Chicago, University of Washington, University of Illinois, University of New Mexico.

Examination of meeting abstracts yielded a total of 7,062 abstracts (Figure 2), of which 9 abstracts were retrieved for full‐text review. Two abstracts met inclusion criteria (Table 1).50, 51 Excluded meeting abstracts included published studies that were already abstracted as manuscripts,52, 53 had no nonhospitalist control group,54, 55 did not involve hospitalists,56 surveyed program administrators,57 or examined hospitalists' roles without trainee outcomes.58 Our communications with abstract authors did not yield any relevant additional information.

Figure 2
Search and selection of included meeting abstracts.

Study Settings, Designs, and Outcomes

Six of 8 included studies occurred in an internal medicine inpatient setting: 4 in university hospitals,44, 46, 47, 50 1 in a public safety‐net hospital,48 and 1 in a community teaching hospital.51 The remaining 2 studied the inpatient pediatric wards in university hospitals.45, 49

In 7 of 8 included studies, trainees were assigned to work with hospitalists or nonhospitalists according to the study site's standard method for allocating trainees to rotations; trainees were not allowed to choose their supervising attending. We considered these studies to be quasirandomized. The other study compared nonhospitalist attending evaluations the year prior to implementing hospitalists to hospitalist attending evaluations the year afterward.45

Studies measured trainee attitudes through routinely administered evaluations,46, 47, 49, 51 dedicated surveys,44, 48, 50 or both.45 One also qualitatively coded trainees' written responses to determine themes.48

Characteristics of Learners

Studies assessed only residents,44, 45, 51 only third‐year medical students,46, 49, 50 or residents and third‐year and fourth‐year medical students.47, 48 The amount of time trainees spent with each attending physician ranged from 2 to 4 weeks. One‐half of the studies reported the number of trainees responding to surveys in each attending group. Two studies had an equivalent number of trainees respond for each attending group,47, 49 while the other 2 had approximately twice as many trainees working with hospitalists respond.46, 50 No studies reported other characteristics of trainees assigned to the different attending groups.

Characteristics of Attendings

Hospitalists were described as attending between 12 and 32 weeks per year while nonhospitalists worked 2 to 12 weeks, except in 1 study where nonhospitalists worked 28 weeks (Table 1).49 Two studies separated nonhospitalists into general internists and subspecialists47, 48 but only 1 contrasted the weeks on service for the 2 groups of nonhospitalists.48 On average, hospitalists tended to be younger and have less experience than nonhospitalist attendings (Table 1). In those reporting attending gender, there was no significant difference between the 2 attending groups.

Methodological Quality

Because all of the included studies evaluated only trainee attitudes, all were coded as Level 1 in the Kirkpatrick hierarchy, which covers learners' views on the learning experience: its organization, presentation, content, teaching methods, and the quality of instruction.23

The methodological quality of the studies varied. Seven studies used a contemporaneous control group, and 1 study45 employed a noncontemporaneous comparison of hospitalists to nonhospitalists. Seven included studies reported the trainee response rate, which varied widely (from 54% to 100%) (Table 1). None of the studies reported whether any other educational interventions that could have biased study results were implemented during the study period. Of the 6 published studies, 5 were rated at a data strength of 2 or 3, and 1 was rated a 4 (Table 1).

Trainee Evaluations Comparing Hospitalists to All Nonhospitalists

The most commonly evaluated attending measures included trainees' overall satisfaction with attendings (n = 8 studies),4451 trainees' ratings of teaching effectiveness (n = 5 studies),44, 46, 47, 49, 50 attending effectiveness of feedback delivery (n = 4 studies),4548 trainees' perceptions of attending knowledge (n = 3 studies),45, 47, 48 and attending involvement of trainees in patient care decisions (n = 3 studies) (Table 2).44, 45, 47 Several other outcomes were reported in 2 or fewer studies (Table 3). All studies reported nonnormally distributed evaluation ratings, with trainee ratings of all attending groups skewed toward high ratings.

Table 2. Trainee Ratings of Attending Teaching

Domain | Number of Studies | Hospitalists Better | Nonhospitalists Better | No Difference
Overall rating of attending | 8 | 44‐46, 47*, 48‐51 | — | 47
Teaching effectiveness | 5 | 44, 48‐50 | — | 46
Feedback delivery | 4 | 45, 47*, 48 | — | 46, 47
Involvement of trainees in patient care decisions | 3 | 45, 48 | — | 44
Quality of ward rounds | 2 | 44, 49 | — | —
Effectiveness as a role model | 2 | 45, 48 | — | —
Communication of rotation goals | 1 | 46 | — | —
Emphasizes evidence‐based care | 1 | 48 | — | —
Emphasizes cost‐effective care | 1 | 47 | — | —
Availability | 2 | 45 | — | 48
Perceived knowledge | 3 | 45, 48 | — | 47
Bedside teaching | 1 | 45 | — | —
Apparent interest in psychosocial aspects of care | 1 | 47* | — | 47

NOTE: Entries are reference citations. Studies that achieved statistical significance in demonstrating increased trainee satisfaction for each domain are listed in the corresponding attending group's column; dashes indicate no studies in that column.
* Hospitalists compared to subspecialists.
† Hospitalists compared to general internists.
Table 3. Results of Studies Evaluating Hospitalists vs. Nonhospitalists

Reference Citation, Location, Year | Study Design | Major Findings | Data Strength
Chung et al.,44 University of Chicago, 2002 | Retrospective, quasirandomized with contemporaneous controls | Percentage of internal medicine house staff very satisfied with internal medicine attendings (5‐point scale, 5 = very satisfied): end of month, hospitalists 58%, nonhospitalists 39%; end of year, hospitalists 76%, nonhospitalists 48%. Compared to residents who did not work with hospitalists, residents with experience with hospitalists had fewer concerns about loss of autonomy (8% vs. 41%, P = 0.02) and no difference in concerns about exposure to different faculty (41% vs. 60%, P = 0.08). | 2
Landrigan et al.,45 Children's Hospital, Boston, 2002 | Retrospective, single group with historical control | Overall satisfaction with inpatient experience (4‐point scale, 4 = extremely satisfied): interns, 3.5 with hospitalists, 3.2 with nonhospitalists; PGY‐3s, 3.5 with hospitalists, 3.5 with nonhospitalists. Rating of teaching effectiveness (5‐point scale, 5 = excellent): hospitalists 4.7, nonhospitalists 4.4. PGY‐3s reported less ability to make decisions independently and less ability to supervise with hospitalist attendings, but differences did not reach statistical significance (P = 0.07). | 2
Hunter et al.,46 Oregon Health & Sciences, 2004 | Retrospective, quasirandomized with contemporaneous controls | MS3 combined overall rating of attending during internal medicine clerkship (9‐point scale, 9 = outstanding): hospitalists 8.56, nonhospitalists 8.22. The combined rating was a composite of 7 parameters (communication of rotation goals, establishing learning climate, use of educational time, teaching style, evaluation and feedback, contribution to growth and development, and effectiveness as clinical teacher). | 3
Hauer et al.,47 University of California, San Francisco, 2004 | Retrospective, quasirandomized with contemporaneous controls | Internal medicine house staff, MS4, and MS3 overall satisfaction with internal medicine attending (9‐point scale, 9 = excellent): hospitalists 8.3 (SD 0.9), nonhospitalist general internists 7.9 (SD 1.3), subspecialists 8.1 (SD 1.7); P = 0.01 vs. generalists, P = 0.20 vs. subspecialists. Attending teaching effectiveness (5‐point scale, 5 = excellent): hospitalists 4.8 (SD 0.6), general internists 4.5 (SD 0.8), subspecialists 4.5 (SD 1.1); P < 0.001 vs. generalists, P = 0.03 vs. subspecialists. Attending knowledge (9‐point scale): hospitalists 8.2 (SD 1.1), general internists 7.9 (SD 1.2), subspecialists 8.1 (SD 1.5); P < 0.01 vs. generalists, P = 0.10 vs. subspecialists. Attending valuation of trainee opinions (9‐point scale): hospitalists 8.3 (SD 0.9), general internists 8.2 (SD 1.3), subspecialists 8.1 (SD 1.7); P = 0.20 vs. generalists, P = 0.60 vs. subspecialists. Provision of feedback (9‐point scale): hospitalists 7.9 (SD 1.6), general internists 7.2 (SD 2.3), subspecialists 7.0 (SD 2.5); P < 0.01 vs. generalists, P = 0.01 vs. subspecialists. | 4

NOTE: Shows the individual study results for outcomes measured in 3 or more studies.
* Meeting abstracts.
† Brigham & Women's Hospital, University of California San Francisco, University of Chicago, University of Washington, University of Illinois, University of New Mexico.
Abbreviations: CI, confidence interval; MS, medical student; PGY, postgraduate year; SD, standard deviation.
Kripalani et al.,48 Grady Memorial, 2004 Retrospective, quasirandomized with contemporaneous controls Internal medicine house staff, MS4 and MS3 satisfaction with Internal Medicine attending teaching effectiveness (25‐item McGill Clinical Tutor Evaluation, maximum score 150): hospitalists 134.5 (95% CI, 130.2‐138.8), general internists 135.0 (95% CI, 131.2‐138.8), specialists 126.3 (95% CI, 120.4‐132.1). 3
Geskey and Kees‐Folts,49 Penn State Children's Hospital, 2007 Retrospective, quasirandomized with contemporaneous controls MS3 overall satisfaction with Pediatric attending teaching (4‐point scale, 4 = excellent), hospitalists 3.9, nonhospitalists 3.0. MS3s rated hospitalists higher than nonhospitalists in all 4 attending characteristics measured: teaching effectiveness, effectiveness as a pediatrician, student advocacy effectiveness, and overall. 3
Arora et al.,50 Multiple sites, 2005*, Retrospective, quasirandomized with contemporaneous controls MS3 overall satisfaction with Internal Medicine clerkship (5‐point scale, 5 = very satisfied): hospitalists 4.5, nonhospitalists 4.3. Trends toward greater emphasis on education (P = 0.07) and higher quality attending rounds (P = 0.07) with hospitalists. Effects of hospitalists on resident perceptions of autonomy not reported. 2
Chintharajah and Aronowitz,51 California Pacific Medical Center, 2006* Retrospective, with contemporaneous controls; method of assignment to attending type not stated. Internal Medicine house staff ratings of Internal Medicine attendings (9‐point scale in 1998‐2002, 5‐point scale in 2003‐2005): hospitalists were rated higher than nonhospitalists in all areas assessed in 1998‐2002, but in only 3 areas (accessibility, feedback, and teaching procedures) in 2003‐2005. Data not shown. 1

Of the 8 studies comparing hospitalists to all nonhospitalists, trainees were statistically significantly more satisfied with hospitalists in all but 1 (Table 3).44-51 Hospitalists' overall teaching effectiveness was rated significantly higher in 4 studies,44, 47, 49, 50 but 1 did not demonstrate a difference.46 Hospitalists were also rated higher at feedback delivery than all nonhospitalists, with 2 studies45, 47 and 1 abstract reporting hospitalists' superiority. One other study showed increased satisfaction with hospitalists' feedback only in comparison to subspecialists.48 Hospitalists were perceived as more knowledgeable and as allowing greater trainee involvement in patient care decisions in 2 of the 3 studies addressing each of these questions. Regarding preconceived notions, 1 study demonstrated that residents who had never worked with hospitalists were significantly more concerned that hospitalists would negatively impact their clinical autonomy than residents who had worked with hospitalists at least once.44

Hospitalists were rated as more available in 1 study45 with a trend toward greater availability in another.47 Trainee satisfaction was higher with hospitalists on other measures, including quality of ward rounds,44, 49 effectiveness as a role model,45, 48 communication of rotation goals,46 emphasis on evidence‐based medicine,48 and emphasis on cost‐effective care.47 In 1 study, trainees were significantly more satisfied with the bedside teaching of nonhospitalists.45 In another, trainees felt that, compared to hospitalists, general internists seemed to be more interested in the psychosocial aspects of patients' care.48

Trainee Evaluations Comparing Hospitalists to Outpatient Generalists and Subspecialists

Of the studies that examined whether the type of nonhospitalist (general internist vs. subspecialist) impacted trainee ratings, 1 showed that trainees were equally satisfied with hospitalists and general internists but that general internists were rated higher than hospitalists for feedback delivery.48 Hospitalists were rated significantly higher than subspecialists overall and for feedback delivery.48 The other study that subclassified nonhospitalists into general internists and subspecialists showed that hospitalists were more highly rated than both general internists and subspecialists overall and for teaching effectiveness and feedback delivery.47

DISCUSSION

This systematic review of the literature describing hospitalists as educators shows that trainees are generally more satisfied with hospitalists than nonhospitalists on their inpatient rotations. Hospitalists were rated more highly than traditional ward attendings overall, as well as for teaching effectiveness44, 47, 49, 50 and feedback delivery.45, 47 Limited data (3 studies each) indicates that trainees perceive hospitalists as being at least as knowledgeable as traditional attendings and as encouraging similar levels of trainee involvement in patient care decisions. Trainees may be more satisfied with hospitalists than with general internists or subspecialists, although some comparisons have shown that general internists may be preferred. No studies have evaluated the impact of hospitalists on trainee outcomes beyond satisfaction, such as knowledge acquisition, rotation grades, or clinical performance.

Our review suggests that, with increased time spent on the wards, hospitalists exhibit attributes consistent with specialization in inpatient care.1, 14 Hospitalists were noted to emphasize cost‐effectiveness47 and evidence‐based medicine48 and to conduct higher‐quality ward rounds.44, 49 Hospitalists are uniquely qualified to teach about inpatient goals and processes such as decreasing length of stay in the hospital and cost‐effective care.1, 3, 7, 12, 15 Trainees see hospitalists as role models,45, 47 and the site‐defined nature of hospital medicine promotes trainees' access to hospitalist attendings. Such accessibility has been described as an independent attribute of excellent physician role models.59, 60, 62 The findings of our methodologically rigorous systematic review extend the conclusions of a narrative review of the literature on hospitalists as educators, which also identified favorable ratings of hospitalists along with some unresolved concerns about resident autonomy and the role of subspecialist teachers in hospitalist systems.63

Diminished trainee autonomy was an early concern about hospitalists in academic medical centers.16, 20, 21 In the earliest study we identified that assessed autonomy, trainees perceived similar amounts of autonomy with hospitalist and nonhospitalist attendings.44 Interestingly, house staff in more experienced hospitalist models even described increased involvement in patient care when supervised by hospitalist attendings, in both the pediatric and internal medicine settings.45, 47 Hospitalists might also generate more clinical diversity for house staff by reducing length of stay and thereby enhancing opportunities for learning with newly admitted patients.13, 14, 64

The studies that did not demonstrate increased satisfaction with hospitalists may be instructive as well. One negative study46 reported results from a program that instituted the hospitalist model in response to declining trainee satisfaction. With an emphasis on improving the educational experience, nonhospitalist physicians who were already rated highly as teachers were also selected to attend on the wards. Nonetheless, trainees were still more satisfied with hospitalists overall. Another study showed that hospitalists were rated more highly than subspecialists for feedback delivery but less highly than general internists.47 The authors suggest that their general internists may have benefited from being a few more years out of training; such correlations of age and rank with evaluations have not been previously described.60, 61

The disadvantages of hospitalists in trainee education identified by this systematic review include the quality of bedside teaching in one study45 and interest in the psychosocial aspects of care in another,48 both compared to general internists. The decline in satisfaction with bedside teaching is a concern, but the comparison was noncontemporaneous, and the authors explained that team size increased, resulting in an overall decrease in time at the bedside.45 The concern that decreased lengths of stay may translate into less time spent with patients and less bedside teaching is not new.18 Although hospitalists have shown particular educational advantages, the balance of clinical efficiency and education remains challenging. Trainees' perception that hospitalists were less interested in the psychosocial aspects of care than general internists48 was also anticipated when inpatient attending models began to shift, because hospitalization may now be viewed by trainees as discontinuous from a patient's outpatient care and social situation.18 Nevertheless, hospitalists have achieved quality measures such as decreased length of stay without decreasing patient satisfaction.10, 12

Our study has several limitations. First, attendings were rated highly in all studies. Such high ratings are common in educational evaluations,65 and this phenomenon creates a ceiling effect that limits variability within the group. Nevertheless, trainees rated hospitalists significantly higher than nonhospitalists overall in all of the included studies. The impact of these small but significant differences on trainees' learning and future clinical performance is unknown. Additionally, the distinction between hospitalists and nonhospitalists was not universal. Initially, it was proposed that academic hospitalists work as hospitalists 3 to 6 months each year.1 This definition held in almost all included studies that reported attending time on the wards, with hospitalists working 3 to 7 months and nonhospitalists working less than 3 months, but the observed variability does not permit a universal definition of a hospitalist. It is possible that publication bias influenced our findings toward positive ratings of hospitalists; we reviewed and included meeting abstracts to minimize this bias, although we did not review family medicine meeting abstracts.

The included studies had some methodologic strengths, including quasirandom assignment of trainees and use of a contemporaneous control group in almost all studies. However, the overall methodologic strength was fair, given limitations in response rates and in reporting of cointerventions; we thus considered most studies to represent trends rather than definitive results. Finally, all of the studies meeting our inclusion criteria to date have evaluated only trainees' attitudes and beliefs. Because knowledge and skills were not objectively assessed, it is unclear how increased trainee satisfaction translates to knowledge and skill acquisition on the wards. However, Miller's pyramid and its proposed modification, the Cambridge model, suggest that targeting attitudes precedes knowledge acquisition,66 and our study suggests the need for a research agenda examining the impact of hospitalists on trainees' future performance. Griffith et al.67 demonstrated an association between increased satisfaction with teaching and medical students' performance on clerkship examinations and the U.S. Medical Licensing Examination (USMLE) Step 2.

Overall, trainees were more satisfied with hospitalists' teaching and feedback delivery. Although the available studies are few, vary in quality, and could not be compared using meta‐analytic techniques, the currently available data suggests that hospitalists improve learner satisfaction. More studies are needed to delineate the differences between hospitalists and nonhospitalist general internists. Continued exploration of the effects of attending age and rank on trainee learning may help determine whether this effect is reproducible, and which facets of attendings' teaching actually impact trainees' knowledge, skill acquisition, and behaviors. Because all studies evaluated only attitudes, studies analyzing knowledge and skills are required to more fully understand the educational outcomes of the hospitalist model.

References
  1. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335:514-517.
  2. Society of Hospital Medicine. Definition of a Hospitalist. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=General_Information
  3. Society of Hospital Medicine. Hospital Medicine Specialty Shows 20 Percent Growth. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Press_Releases
  4. Kralovec PD, Miller JA, Wellikson L, Huddleston JM. The status of hospital medicine groups in the United States. J Hosp Med. 2006;1:75-80.
  5. Brown MD, Halpert A, McKean S, Sussman A, Dzau VJ. Assessing the value of hospitalists to academic health centers: Brigham and Women's Hospital and Harvard Medical School. Am J Med. 1999;106:134-137.
  6. Wachter RM, Katz P, Showstack J, Bindman AB, Goldman L. Reorganizing an academic medical service. Impact on cost, quality, patient satisfaction, and education. JAMA. 1998;279:1560-1565.
  7. Wachter RM, Goldman L. Implications of the hospitalist movement for academic departments of medicine: lessons from the UCSF experience. Am J Med. 1999;106:127-133.
  8. Davis KM, Koch KE, Harvey JK, et al. Effects of hospitalists on cost, outcomes, and patient satisfaction in a rural health system. Am J Med. 2000;108:621-626.
  9. Craig DE, Hartka L, Likosky WH, et al. Implementation of a hospitalist system in a large health maintenance organization: the Kaiser Permanente experience. Ann Intern Med. 1999;130:355-359.
  10. Halpert AP, Pearson SD, LeWine HE, McKean SC. The impact of an inpatient physician program on quality, utilization, and satisfaction. Am J Manag Care. 2000;6:549-555.
  11. Meltzer DO, Shah MN, Morrison J. Decreased length of stay, costs and mortality in a randomized trial of academic hospitalists. J Gen Intern Med. 2001;16:S208.
  12. Auerbach AD, Wachter RM, Katz P, Showstack J, Baron RB, Goldman L. Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes. Ann Intern Med. 2002;137(11):859-865.
  13. Lindenauer PK, Rothberg MB, Pekow PS, Kenwood C, Benjamin EM, Auerbach AD. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357(25):2589-2600.
  14. Goldman L. The impact of hospitalists on medical education and the academic health system. Ann Intern Med. 1999;130:364-367.
  15. Whitcomb WF, Nelson JR. The role of hospitalists in medical education. Am J Med. 1999;107:305-309.
  16. Hauer KE, Wachter RM. Implications of the hospitalist model for medical students' education. Acad Med. 2001;76:324-330.
  17. Haftel HM, Bozynski ME. Changing teaching for changing times: the effect of a hospitalist program on the education of students. Acad Med. 2000;75:521.
  18. Wachter RM. Reflections: the hospitalist movement a decade later. J Hosp Med. 2006;1(4):248-252.
  19. Hollander H. Response to the effect of hospitalist systems on residency education: re-incorporating medical subspecialists. Acad Med. 2001;76:555-556.
  20. Best Evidence Medical Education (BEME) Collaboration, Dundee, UK. Home page. Available at: http://www.bemecollaboration.org. Accessed May 2009.
  21. Kirkpatrick DL. Evaluation of Training. In: Craig R, Mittel I, eds. Training and Development Handbook. New York: McGraw-Hill; 1967:87-112.
  22. Kulaga ME, Charney P, O'Mahony SP, et al. The positive impact of initiation of hospitalist clinician educators. J Gen Intern Med. 2004;19(4):293-301.
  23. Dwight P, MacArthur C, Friedman JN, Parkin PC. Evaluation of a staff-only hospitalist system in a tertiary care, academic children's hospital. Pediatrics. 2004;114(6):1545-1549.
  24. Homme JH. How pediatric hospitalist programs can affect graduate medical education. Pediatr Ann. 2003;32(12):822-824.
  25. Marinella MA. A "hospitalist" rotation increases short-term knowledge of fourth-year medical students. South Med J. 2002;95(3):374.
  26. Wachter RM. The hospitalist movement 10 years later: life as a Swiss army knife. MedGenMed. 2006;8(3):30.
  27. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1(4):257-266.
  28. Pressel DM. Hospitalists in medical education: coming to an academic medical center near you. J Natl Med Assoc. 2006;98(9):1501-1504.
  29. Abbo ED, Volandes AE. Teaching residents to consider costs in medical decision making. Am J Bioeth. 2006;6(4):33-34.
  30. Association of Program Directors in Internal Medicine; Fitzgibbons JP, Bordley DR, Berkowitz LR, Miller BW, Henderson MC. Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine. Ann Intern Med. 2006;144(12):920-926.
  31. Ranji SR, Rosenman DJ, Amin AN, Kripalani S. Hospital medicine fellowships: works in progress. Am J Med. 2006;119(1):72.e1-e7.
  32. Wilson SD. Employing hospitalists to improve residents' inpatient learning. Acad Med. 2001;76(5):556.
  33. Glasheen JJ, Epstein KR, Siegal E, Kutner JS, Prochazka AV. The spectrum of community-based hospitalist practice: a call to tailor internal medicine residency training. Arch Intern Med. 2007;167(7):727-728.
  34. McKean SC, Budnitz TL, Dressler DD, Amin AN, Pistoria MJ. How to use the core competencies in hospital medicine: a framework for curriculum development. J Hosp Med. 2006;1(suppl 1):57-67.
  35. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(suppl 1):48-56.
  36. O'Leary KJ, Liebovitz DM, Baker DW. How hospitalists spend their time: insights on efficiency and safety. J Hosp Med. 2006;1(2):88-93.
  37. Kingston M. Determining the professional attributes of a hospitalist: experience in one Australian metropolitan hospital. Intern Med J. 2005;35(5):305-308.
  38. Mufson MA. The internal medicine clerkship: the view from the vantage point of one chair of medicine. Am J Med. 1999;107(2):109-111.
  39. Shea JA, Wasfi YS, Kovath KJ, Asch DA, Bellini LM. The presence of hospitalists in medical education. Acad Med. 2000;75(10 suppl):S34-S36.
  40. Dent AW, Crotty B, Cuddihy HL, et al. Learning opportunities for Australian prevocational hospital doctors: exposure, perceived quality and desired methods of learning. Med J Aust. 2006;184(9):436-440.
  41. Khera N, Stroobant J, Primhak RA, Gupta R, Davies H. Training the ideal hospital doctor: the specialist registrars' perspective. Med Educ. 2001;35(10):957-966.
  42. Chung P, Morrison J, Jin L, Levinson W, Humphrey H, Meltzer D. Resident satisfaction on an academic hospitalist service: time to teach. Am J Med. 2002;112(7):597-601.
  43. Landrigan CP, Muret-Wagstaff S, Chiang VW, Nigrin DJ, Goldmann DA, Finkelstein JA. Effect of a pediatric hospitalist system on housestaff education and experience. Arch Pediatr Adolesc Med. 2002;156(9):877-883.
  44. Hunter AJ, Desai SS, Harrison RA, Chan BK. Medical student evaluation of the quality of hospitalist and nonhospitalist teaching faculty on inpatient medicine rotations. Acad Med. 2004;79(1):78-82.
  45. Hauer KE, Wachter RM, McCulloch CE, Woo GA, Auerbach AD. Effects of hospitalist attending physicians on trainee satisfaction with teaching and with internal medicine rotations. Arch Intern Med. 2004;164(17):1866-1871.
  46. Kripalani S, Pope AC, Rask K, et al. Hospitalists as teachers. J Gen Intern Med. 2004;19(1):8-15.
  47. Geskey JM, Kees-Folts D. Third-year medical students' evaluation of hospitalist and nonhospitalist faculty during the inpatient portion of their pediatrics clerkships. J Hosp Med. 2007;2(1):17-22.
  48. Arora V, Wetterneck T, Schnipper J, et al. The effects of hospitalist teaching attendings on medical student satisfaction and career interest: results from the multicenter hospitalist study. Society of Hospital Medicine; 2005 Annual Meeting Abstracts.
  49. Chintharajah S, Aronowitz P. Hospitalist teachers may lose their superiority over non-hospitalist teachers in "mature" hospitalist systems. Society of General Internal Medicine; 2006 Annual Meeting Abstracts.
  50. Hunter A, Desai S, Harrison R, Chan B. Medical student evaluation of the quality of hospitalist and non-hospitalist teaching faculty on inpatient medicine rotations. Society of Hospital Medicine; 2003 Annual Meeting Abstracts.
  51. Hauer KE, Auerbach A, Woo GA, Wachter RM. Effects of hospitalist attendings on trainee satisfaction with rotations. Society of General Internal Medicine; 2002 Annual Meeting Abstracts.
  52. Phy M, Rosenman D, Huddleston J. Internal medicine and orthopedic residents' perception of education and satisfaction after the initiation of a non-resident hospitalist service. Society of Hospital Medicine; 2004 Annual Meeting Abstracts.
  53. O'Leary K, Chadha V, Fleming V, Baker D. Medical subinternship: student experience on a resident uncovered hospitalist service. Society of Hospital Medicine; 2006 Annual Meeting Abstracts.
  54. Hefner JE, Elnicki DM, Barnard K, Painter T, McNeil M. A randomized controlled trial to evaluate the effect of dedicated clinical teachers (or "Educationalists") on the internal medicine clerkship experience. Society of General Internal Medicine; 2002 Annual Meeting Abstracts.
  55. Marratta D, Rajan S, Novotny J. Internal medicine residency program goals drive the development of hospitalist programs at teaching hospitals. Society of Hospital Medicine; 2002 Annual Meeting Abstracts.
  56. McKean S, Hafler J. The role of the hospitalist in teaching. Society of General Internal Medicine; 2003 Annual Meeting Abstracts.
  57. McLeod PJ, James CA, Abrahamowicz M. Clinical tutor evaluation: a 5-year study by students on an inpatient service and residents in an ambulatory care clinic. Med Educ. 1993;27:48-54.
  58. Wright SM, Kern DE, Kolodner K, Howard DM, Brancati FL. Attributes of excellent attending-physician role models. N Engl J Med. 1998;339:1986-1992.
  59. Irby DM, Gillmore GM, Ramsey PG. Factors affecting ratings of clinical teachers by medical students and residents. J Med Educ. 1987;62:1-7.
  60. Kroenke K, Simmons JO, Copley JB, Smith C. Attending rounds: a survey of physician attitudes. J Gen Intern Med. 1990;5:229-233.
  61. Goldenberg J, Glasheen JJ. Hospitalist educators: future of inpatient internal medicine training. Mt Sinai J Med. 2008;75:430-435.
  62. Landrigan CP, Conway PH, Edwards S, Srivastava R. Pediatric hospitalists: a systematic review of the literature. Pediatrics. 2006;117:1736-1744.
  63. Speer AJ, Solomon DJ, Fincher RM. Grade inflation in internal medicine clerkships: results of a national survey. Teach Learn Med. 2000;12:112-116.
  64. Rethans JJ, Norcini JJ, Barón-Maldonado M, et al. The relationship between competence and performance: implications for assessing practice performance. Med Educ. 2002;36(10):901-909.
  65. Griffith CH, Georgesen JC, Wilson JF. Six-year documentation of the association between excellent clinical teaching and improved students' examination performances. Acad Med. 2000;75(10 suppl):S62-S64.
Issue
Journal of Hospital Medicine - 4(8)
Page Number
490-498
Legacy Keywords
clinical clerkship/methods, hospitalist, hospital teaching, internship methods, program evaluation, residency/methods

Wachter and Goldman1 described the hospitalist model for inpatient care more than a decade ago. The Society of Hospital Medicine (SHM) defines hospitalists as physicians whose primary professional focus is the general medical care of hospitalized patients. Their activities include patient care, teaching, research, and leadership related to hospital medicine.2 This care delivery model has enjoyed exponential growth, with approximately 20,000 hospitalists in the United States, and an estimated 30,000 by the end of the decade.3-5 Currently, 29% of hospitals, including 55% with at least 200 beds, employ hospitalists to coordinate inpatient care.6 Data suggests that hospitalists promote cost containment and decrease length of stay without negatively affecting rates of death, readmission, or patient satisfaction.7-15

In academic settings, hospitalists also provide a substantial amount of teaching to trainees,16-18 and the hospitalist model represents a fundamental change in inpatient education delivery. Traditional ward attendings typically consisted of a heterogeneous group of subspecialists, laboratory‐based clinician scientists, and general internists, many of whom attended and taught relatively infrequently. By virtue of focusing purely on inpatient care, hospitalists are more intimately involved with inpatient care systems, as well as teaching challenges (and opportunities) in the inpatient setting. The theoretical educational benefits of hospitalists include greater availability, more expertise in hospital medicine, and more emphasis on cost‐effective care.7, 18, 19 Concerns that trainees would have diminished autonomy and less exposure to subspecialist care have not been borne out.16, 20, 21

The purpose of this study was to examine the role of hospitalists on inpatient trainee education. We systematically reviewed the literature to determine the impact of hospitalists compared to nonhospitalist attendings on medical students' and residents' education.

MATERIALS AND METHODS

Data Sources

We searched the MEDLINE, Database of Reviews of Effectiveness (DARE), National Health Service (NHS) Economic Evaluation Database (EED), Health Technology Assessment (HTA), and Cochrane Collaboration databases for citations using the term hospitalist through November 2007, and updated the literature search through October 1, 2008. Additionally, we manually searched the bibliographies of relevant retrieved articles and national meeting abstracts from the SHM (2002‐2007), Society of General Internal Medicine (SGIM) (2001‐2007), and Pediatric Academic Societies (PAS) (2000‐2007). The authors of included meeting abstracts were contacted for additional information.

Data Selection

We included English‐language studies that reported the effects of hospitalist attending physicians on the knowledge, skills, or attitudes of medical students or residents in an inpatient setting, and compared these outcomes to a comparison group of trainees taught by nonhospitalist attending physicians. We excluded opinion articles, review articles, descriptions of curricula, surveys of program leaders, and evaluations of teaching without trainee assessments.

Data Extraction

We developed a standardized data extraction form based on the Best Evidence Medical Education (BEME) Collaboration protocol.22 The following information was extracted from each article: study design and measurement scale; attending and trainee information; study setting; response rate, if available; outcomes measuring attending physician's teaching ability; and outcomes assessing trainees' attitudes, knowledge, and skills. Open‐ended items solicited overall impression, concerns, new insights, and avenues for research not already captured in the data extraction form. A meta‐analysis was not performed due to varying measures for teacher assessments.

One investigator (P.N.) performed the literature search and a second investigator (K.E.H.) reviewed and confirmed the appropriateness of the articles retained and excluded based on review of the titles and abstracts. Next, 3 investigators (P.N., K.E.H., S.R.) confirmed that all the included articles met inclusion criteria. All 3 independently abstracted each article and coded the strength of findings and methodological quality based on: (1) response rate; (2) number of trainees and attendings; (3) control for additional education interventions; (4) explicit indication of random allocation of trainees to attendings; and (5) presence of a contemporaneous comparison group of nonhospitalist attendings. The level of behavioral impact, classified by the 4‐level Kirkpatrick hierarchy, was also recorded for each study to assess the strength of the intervention.23 The strength of data was rated for each study on a scale of 1 to 5, with 1 = no clear conclusions can be drawn; 2 = results ambiguous, but appears to be a trend; 3 = conclusions can probably be based on results; 4 = results are clear and very likely to be true; and 5 = results are unequivocal. Disagreements about search criteria, data extraction, and classification of study results were resolved by consensus.

RESULTS

Search Results

The database searches yielded 711 articles (Figure 1). Based on review of titles and abstracts, 32 articles were retrieved for full‐text review. During full‐text review, we eliminated 26 studies because they had no nonhospitalist control group,7, 16, 18, 24-27 were opinion or review articles,19, 21, 28-34 examined hospitalists' roles without trainee outcomes,17, 35-40 surveyed program administration,41 or did not involve hospitalists.42, 43 Ultimately, 6 citations published between 2002 and 2007 met all inclusion criteria (Table 1).44-49 The updated literature search through October 1, 2008 did not yield any additional relevant studies.

Figure 1
Search and selection of included articles.
Summary of Studies
Location, Year (Reference) Learners (n) Number of Attendings Attending Ward Responsibilities (weeks per year) Attending Experience (mean years postgraduation) Attending Gender (% female) Survey Response Rate (%) Data Strength
  • Meeting abstracts.

  • Brigham & Women's Hospital, University of California San Francisco, University of Chicago, University of Washington, University of Illinois, University of New Mexico.

  • Data strength: 1 (no clear conclusions can be drawn), 2 (results ambiguous, but appears to be a trend), 3 (conclusions can probably be based on results), 4 (results are clear and very likely to be true), 5 (results are unequivocal).

University of Chicago, 200244 PGY‐unspecified (86) 2‐4 hospitalists; unknown nonhospitalists 12‐24 hospitalists; 4‐8 nonhospitalists 58 2
Children's Hospital, Boston, 200245 PGY‐1, PGY‐3 (unknown) 8 hospitalists; 75 nonhospitalists 12‐16 hospitalists; 2‐4 nonhospitalists 63 2
Oregon Health & Sciences, 200446 MS3 (138) 6 hospitalists; 11 nonhospitalists 22.8 hospitalists; 6.4 nonhospitalists 4.2 hospitalists; 10.9 nonhospitalists 2/6 (33%) hospitalists; 4/11 (36%) nonhospitalists 72 3
University of California, San Francisco, 200447 MS3‐4, PGY1‐3 (917) 17 hospitalists; 39 general internists; 13 subspecialists 12 hospitalists; 3.24 nonhospitalists 6/17 (35%) hospitalists; 17/52 (33%) nonhospitalists 91 4
Grady Memorial, 200448 MS3‐4, PGY1‐3 (unknown) 12 hospitalists; 27 general internists; 51 subspecialists 24 hospitalists; 6 nonhospitalists 6.1 hospitalists; 9.7 general internists; 21.6 subspecialists 6/12 (50%) hospitalists; 16/51 (31%) nonhospitalists 81 3
Penn State Children's Hospital, 200749 MS3 (67) 2 hospitalists; 8 nonhospitalists 2 MDs covered 32 hospitalists; 8 MDs covered 28 nonhospitalists 1/2 (50%) hospitalists; 2/8 (25%) nonhospitalists 100 3
Multiple sites, 200550* MS3 (294) 54 2
California Pacific Medical Center, 200651* PGY‐unspecified (unknown) 1

Examination of meeting abstracts yielded a total of 7,062 abstracts (Figure 2), of which 9 abstracts were retrieved for full‐text review. Two abstracts met inclusion criteria (Table 1).50, 51 Excluded meeting abstracts included published studies that were already abstracted as manuscripts,52, 53 had no nonhospitalist control group,54, 55 did not involve hospitalists,56 surveyed program administrators,57 or examined hospitalists' roles without trainee outcomes.58 Our communications with abstract authors did not yield any relevant additional information.

Figure 2
Search and selection of included meeting abstracts.

Study Settings, Designs, and Outcomes

Six of 8 included studies occurred in an internal medicine inpatient setting: 4 in university hospitals,44, 46, 47, 50 1 in a public safety‐net hospital,48 and 1 in a community teaching hospital.51 The remaining 2 studied the inpatient pediatric wards in university hospitals.45, 49

In 7 of 8 included studies, trainees were assigned to work with hospitalists or nonhospitalists according to the study site's standard method for allocating trainees to rotations; trainees were not allowed to choose their supervising attending. We considered these studies to be quasirandomized. The other study compared nonhospitalist attending evaluations the year prior to implementing hospitalists to hospitalist attending evaluations the year afterward.45

Studies measured trainee attitudes through routinely administered evaluations,46, 47, 49, 51 dedicated surveys,44, 48, 50 or both.45 One also qualitatively coded trainees' written responses to determine themes.48

Characteristics of Learners

Studies assessed only residents,44, 45, 51 only third‐year medical students,46, 49, 50 or residents and third‐year and fourth‐year medical students.47, 48 The amount of time trainees spent with each attending physician ranged from 2 to 4 weeks. One‐half of the studies reported the number of trainees responding to surveys in each attending group. Two studies had an equivalent number of trainees respond for each attending group,47, 49 while the other 2 had approximately twice as many trainees working with hospitalists respond.46, 50 No studies reported other characteristics of trainees assigned to the different attending groups.

Characteristics of Attendings

Hospitalists were described as attending between 12 and 32 weeks per year while nonhospitalists worked 2 to 12 weeks, except in 1 study where nonhospitalists worked 28 weeks (Table 1).49 Two studies separated nonhospitalists into general internists and subspecialists47, 48 but only 1 contrasted the weeks on service for the 2 groups of nonhospitalists.48 On average, hospitalists tended to be younger and have less experience than nonhospitalist attendings (Table 1). In those reporting attending gender, there was no significant difference between the 2 attending groups.

Methodological Quality

Because all of the included studies evaluated only trainee attitudes, all were coded as Level 1 on the Kirkpatrick hierarchy, which covers learners' views of the learning experience: its organization, presentation, content, teaching methods, instructional materials, and quality of instruction.23

The methodological quality of the studies varied. Seven studies used a contemporaneous control group, and 1 study45 employed a noncontemporaneous comparison of hospitalists to nonhospitalists. Seven included studies reported the trainee response rate, which varied widely (from 54% to 100%) (Table 1). None of the studies reported whether any other educational interventions that could have biased study results were implemented during the study period. Of the 6 published studies, the strength of the data for 5 studies was rated as a 2 or 3 and for 1 the strength was rated a 4 (Table 1).
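As an arithmetic check of the 54% to 100% range cited above, the values below are transcribed from Table 1 for the 7 studies that reported a response rate:

```python
# Survey response rates (%) transcribed from Table 1 for the 7 reporting studies.
response_rates = [58, 63, 72, 91, 81, 100, 54]

print(min(response_rates), max(response_rates))  # 54 100
```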

Trainee Evaluations Comparing Hospitalists to All Nonhospitalists

The most commonly evaluated attending measures included trainees' overall satisfaction with attendings (n = 8 studies),44‐51 trainees' ratings of teaching effectiveness (n = 5 studies),44, 46, 47, 49, 50 attending effectiveness of feedback delivery (n = 4 studies),45‐48 trainees' perceptions of attending knowledge (n = 3 studies),45, 47, 48 and attending involvement of trainees in patient care decisions (n = 3 studies) (Table 2).44, 45, 47 Several other outcomes were reported in 2 or fewer studies (Table 3). All studies reported nonnormally distributed evaluation ratings, with trainee ratings of all attending groups skewed toward high ratings.

Trainee Ratings of Attending Teaching
Number of Studies Evaluated Hospitalists Better Nonhospitalists Better No Difference
  • NOTE: Studies that achieved statistical significance in demonstrating increased trainee satisfaction for each domain are listed in each attending group's column.

  • Hospitalists compared to subspecialists.

  • Hospitalists compared to general internists.

Overall rating of attending 8 44‐46, 47*, 48‐51 47
Teaching effectiveness 5 44, 48‐50 46
Feedback delivery 4 45, 47*, 48 47 46
Involvement of trainees in patient care decisions 3 45, 48 44
Quality of ward rounds 2 44, 49
Effectiveness as a role model 2 45, 48
Communication of rotation goals 1 46
Emphasizes evidence‐based care 1 48
Emphasizes cost‐effective care 1 47
Availability 2 45 48
Perceived knowledge 3 45, 48 47
Bedside teaching 1 45
Apparent interest in psychosocial aspects of care 1 47* 47
Results of Studies Evaluating Hospitalists vs. Nonhospitalists
Reference Citation, Location, Year Study Design Major Findings Data Strength
  • Meeting abstracts.

  • Brigham & Women's Hospital, University of California San Francisco, University of Chicago, University of Washington, University of Illinois, University of New Mexico.

  • NOTE: Shows the individual study results for outcomes measured in 3 or more studies.

  • Abbreviations: CI, confidence interval; MS, medical student; PGY, postgraduate year; SD, standard deviation.

Chung et al.,44 University of Chicago, 2002 Retrospective, quasirandomized with contemporaneous controls % of Internal Medicine house staff very satisfied with Internal Medicine attendings (5‐point scale, 5 = very satisfied): End of month: hospitalist 58%, nonhospitalist 39%; end of year: hospitalists 76%, nonhospitalists 48%. Compared to residents who did not work with hospitalists, residents with experience with hospitalists had fewer concerns about loss of autonomy (8% vs. 41%, P = 0.02), and no difference in concerns about exposure to different faculty (41% vs. 60%, P = 0.08) 2
Landrigan et al.,45 Children's Hospital, Boston, 2002 Retrospective, single group with historical control Overall satisfaction with inpatient experience (4‐point scale, 4 = extremely satisfied): interns, 3.5 with hospitalists, 3.2 with nonhospitalists. PGY3, 3.5 with hospitalists, 3.5 with nonhospitalists. Rating of teaching effectiveness (5‐point scale, 5 = excellent): hospitalists 4.7, nonhospitalists 4.4. PGY3s reported less ability to make decisions independently and less ability to supervise with hospitalist attendings, but the differences did not reach statistical significance (P = 0.07). 2
Hunter et al.,46 Oregon Health & Sciences, 2004 Retrospective, quasirandomized with contemporaneous controls MS3 combined overall rating of attending during Internal Medicine clerkship (9‐point scale, 9 = outstanding): hospitalists 8.56, nonhospitalists 8.22. Combined rating was a composite of 7 parameters (communication of rotation goals, establishing learning climate, use of educational time, teaching style, evaluation and feedback, contribution to growth and development, and effectiveness as clinical teacher). 3
Hauer et al.,47 University of California, San Francisco, 2004 Retrospective, quasirandomized with contemporaneous controls Internal medicine house staff, MS4 and MS3 overall satisfaction with Internal Medicine attending (9‐point scale, 9 = excellent): hospitalists 8.3 (SD 0.9), nonhospitalist general internists 7.9 (SD 1.3), subspecialists 8.1 (SD 1.7); P = 0.01 for comparison of hospitalists vs. nonhospitalist generalists, P = 0.20 for comparison of hospitalists vs. subspecialists. Attending teaching effectiveness (5‐point scale, 5 = excellent): hospitalists 4.8 (SD 0.6), general internists 4.5 (SD 0.8), specialists 4.5 (SD 1.1); P < 0.001 for comparison of hospitalists vs. nonhospitalist generalists, P = 0.03 for comparison of hospitalists vs. subspecialists. Attending knowledge (9‐point scale): hospitalists 8.2 (SD 1.1), nonhospitalists 7.9 (SD 1.2), subspecialists 8.1 (SD 1.5); P < 0.01 for comparison of hospitalists vs. nonhospitalist generalists, P = 0.10 for comparison of hospitalists vs. subspecialists. Attending valuation of trainee opinions (9‐point scale): hospitalists 8.3 (SD 0.9), nonhospitalist generalists 8.2 (SD 1.3), subspecialists 8.1 (SD 1.7); P = 0.20 for comparison of hospitalists vs. nonhospitalist generalists; P = 0.60 for comparison of hospitalist vs. subspecialists. Provision of feedback (9‐point scale): hospitalists 7.9 (SD 1.6), nonhospitalist generalists 7.2 (SD 2.3), subspecialists 7.0 (SD 2.5); P < 0.01 for comparison of hospitalists vs. nonhospitalist generalists, P = 0.01 for comparison of hospitalists vs. subspecialists. 4
Kripalani et al.,48 Grady Memorial, 2004 Retrospective, quasirandomized with contemporaneous controls Internal medicine house staff, MS4 and MS3 satisfaction with Internal Medicine attending teaching effectiveness (25‐item McGill Clinical Tutor Evaluation, maximum score 150): hospitalists 134.5 (95% CI, 130.2‐138.8), general internists 135.0 (95% CI, 131.2‐138.8), specialists 126.3 (95% CI, 120.4‐132.1). 3
Geskey and Kees‐Folts,49 Penn State Children's Hospital, 2007 Retrospective, quasirandomized with contemporaneous controls MS3 overall satisfaction with Pediatric attending teaching (4‐point scale, 4 = excellent), hospitalists 3.9, nonhospitalists 3.0. MS3s rated hospitalists higher than nonhospitalists in all 4 attending characteristics measured: teaching effectiveness, effectiveness as a pediatrician, student advocacy effectiveness, and overall. 3
Arora et al.,50 Multiple sites, 2005*, Retrospective, quasirandomized with contemporaneous controls MS3 overall satisfaction with Internal Medicine clerkship (5‐point scale, 5 = very satisfied): hospitalists 4.5, nonhospitalists 4.3. Trends toward greater emphasis on education (P = 0.07) and higher quality attending rounds (P = 0.07) with hospitalists. Effects of hospitalists on resident perceptions of autonomy not reported. 2
Chintharajah and Aronowitz,51 California Pacific Medical Center, 2006* Retrospective, with contemporaneous controls. Method of assignment to attending type not stated. Internal Medicine house staff ratings of Internal Medicine attendings: using a 9‐point scale in 1998‐2002 and a 5‐point scale in 2003‐2005, hospitalists were rated higher than nonhospitalists in all areas assessed in 1998‐2002, but in only 3 areas in 2003‐2005 (accessibility, feedback, and teaching procedures). Data not shown. 1

Of the 8 studies comparing hospitalists to all nonhospitalists, trainees were statistically significantly more satisfied with hospitalists in all but 1 (Table 3).44‐51 Hospitalists' overall teaching effectiveness was rated significantly higher in 4 studies,44, 47, 49, 50 but 1 did not demonstrate a difference.46 Hospitalists were also rated higher at feedback delivery compared to all nonhospitalists, with 2 studies45, 47 and 1 abstract reporting hospitalists' superiority. One other study showed increased satisfaction with hospitalists' feedback only compared to subspecialists.48 Hospitalists were perceived as being more knowledgeable and as allowing greater trainee involvement in patient care decisions in 2 of 3 studies addressing each of these questions. Regarding preconceived notions, 1 study demonstrated that residents who had never worked with hospitalists were significantly more concerned about hospitalists negatively impacting their clinical autonomy than residents who had worked with hospitalists at least once.44

Hospitalists were rated as more available in 1 study,45 with a trend toward greater availability in another.47 Trainee satisfaction was higher with hospitalists on other measures, including quality of ward rounds,44, 49 effectiveness as a role model,45, 48 communication of rotation goals,46 emphasis on evidence‐based medicine,48 and emphasis on cost‐effective care.47 In 1 study, trainees were significantly more satisfied with the bedside teaching of nonhospitalists.45 In another, trainees felt that, compared to hospitalists, general internists seemed more interested in the psychosocial aspects of patients' care.48

Trainee Evaluations Comparing Hospitalists to Outpatient Generalists and Subspecialists

Of the studies that examined whether the type of nonhospitalist (general internist vs. subspecialist) impacted trainee ratings, 1 showed that trainees were equally satisfied with hospitalists and general internists but that general internists were rated higher than hospitalists for feedback delivery.48 Hospitalists were rated significantly higher than subspecialists overall and for feedback delivery.48 The other study that subclassified nonhospitalists into general internists and subspecialists showed that hospitalists were more highly rated than both general internists and subspecialists overall and for teaching effectiveness and feedback delivery.47

DISCUSSION

This systematic review of the literature describing hospitalists as educators shows that trainees are generally more satisfied with hospitalists than nonhospitalists on their inpatient rotations. Hospitalists were rated more highly than traditional ward attendings overall, and for teaching effectiveness44, 47, 49, 50 and feedback delivery.45, 47 Limited data (3 studies each) indicate that trainees perceive hospitalists as being at least as knowledgeable as traditional attendings and as encouraging similar levels of trainee involvement in patient care decisions. Trainees may be more satisfied with hospitalists than with general internists or subspecialists, although some comparisons have shown that general internists may be preferred. No studies have evaluated the impact of hospitalists on trainee outcomes beyond satisfaction, such as knowledge acquisition, rotation grades, or clinical performance.

Our review suggests that, with increased time spent on the wards, hospitalists exhibit attributes consistent with specialization in inpatient care.1, 14 Hospitalists were noted to emphasize cost‐effectiveness47 and evidence‐based medicine48 and to conduct higher‐quality ward rounds.44, 49 Hospitalists are uniquely qualified to teach about inpatient goals and processes such as decreasing hospital length of stay and delivering cost‐effective care.1, 3, 7, 12, 15 Trainees see hospitalists as role models,45, 47 and the site‐defined nature of hospital medicine promotes trainees' access to hospitalist attendings. Such accessibility has been described as an independent attribute of excellent physician role models.59, 60, 62 The findings of our methodologically rigorous systematic review extend the conclusions of a narrative review of the literature on hospitalists as educators, which also identified favorable ratings of hospitalists alongside unresolved concerns about resident autonomy and the role of subspecialist teachers in hospitalist systems.63

Diminished trainee autonomy was an early concern about hospitalists in academic medical centers.16, 20, 21 In the earliest study we identified that assessed autonomy, trainees perceived similar amounts of autonomy with hospitalists compared to nonhospitalists.44 Interestingly, house staff in more experienced hospitalist models even described experiencing increased involvement in patient care when supervised by hospitalist attendings in both the pediatric and internal medicine settings.45, 47 Hospitalists might also generate more clinical diversity for house staff by reducing length of stay and thereby enhancing opportunities for learning with newly admitted patients.13, 14, 64

The studies that did not demonstrate increased satisfaction with hospitalists may be instructive as well. One negative study46 reported results from a program that instituted the hospitalist model in response to declining trainee satisfaction. With an emphasis on improving the educational experience, nonhospitalist physicians who were already rated highly as teachers were also selected to attend on the wards. Nonetheless, trainees still were more satisfied with hospitalists overall. One study showed that hospitalists were rated more highly than subspecialists when delivering feedback but less highly than general internists.47 The authors suggest that their general internists may have been at a more favorable career stage, being a few more years out of training; such correlations of age and rank with evaluations have not been previously described.60, 61

The disadvantages of hospitalists in trainee education identified by this systematic review include lower ratings for the quality of bedside teaching in one study45 and for interest in psychosocial aspects of care in another,48 compared to general internists. The decline in satisfaction with bedside teaching is a concern, but the comparison was noncontemporaneous, and the authors explained that the team size increased and resulted in an overall decrease in time at the bedside.45 The concern that decreased patient lengths of stay may translate to less time spent with patients and less bedside teaching is not new.18 Although hospitalists have shown particular educational advantages, the balance of clinical efficiency and education remains challenging. Trainees' perception that hospitalists were less interested in the psychosocial aspects of care compared to general internists48 was also anticipated when inpatient attending models began to shift, because hospitalization may now be viewed by trainees as discontinuous from a patient's outpatient care and social situation.18 Nevertheless, hospitalists have been able to achieve quality measures such as decreased length of stay without decreasing patient satisfaction.10, 12

Our study has several limitations. First, all attendings were rated highly in all studies. Such high ratings are commonly seen with educational evaluations,65 and this phenomenon creates a ceiling effect that limits variability within the group. Nevertheless, trainees rated hospitalists significantly higher than nonhospitalists overall in all of the included studies. The impact of these small but significant differences on trainees' learning and future clinical performance is unknown. Additionally, the distinction between hospitalists and nonhospitalists was not universal. Initially, it was proposed that academic hospitalists work as hospitalists 3 to 6 months each year.1 This definition held in almost all included studies that reported attending time on the wards, with hospitalists working 3 to 7 months and nonhospitalists working less than 3 months, but the observed variability does not permit a universal definition of a hospitalist. It is possible that publication bias influenced our findings toward positive ratings of hospitalists; we reviewed and included meeting abstracts to minimize this bias. We did not review family medicine meeting abstracts.

The included studies had some methodologic strengths, including quasirandom assignment of trainees and use of a contemporaneous control group in almost all studies. However, the overall methodologic strength was fair given limitations in response rates and reporting of cointerventions; we thus considered most studies to represent trends rather than definitive results. Finally, all of the studies meeting our inclusion criteria to date only evaluated trainees' attitudes and beliefs. Because knowledge and skills were not objectively assessed, it is unclear how increased trainee satisfaction translates to knowledge and skill acquisition on the wards. However, Miller's pyramid and its proposed modification, the Cambridge model, suggest that targeting attitudes precedes knowledge acquisition,66 and our study suggests the need for a research agenda examining the impact of hospitalists on trainees' future performance. Griffith et al.67 demonstrated an association between increased satisfaction with teaching and medical students' performance on clerkship examinations and the U.S. Medical Licensing Examination (USMLE) Step 2.

Overall, trainees were more satisfied with hospitalists' teaching and feedback delivery. Although the available studies are few, of varying quality, and not amenable to meta‐analysis, the currently available data suggest that hospitalists improve learner satisfaction. More studies to delineate the differences between hospitalists and nonhospitalist general internists are needed. Continued exploration of the effects of attending age and rank on trainee learning may help determine whether this effect is reproducible, and what facets of attendings' teaching actually impact trainees' knowledge, skill acquisition, and behaviors. Since all studies evaluated only attitudes, studies analyzing knowledge and skills are required to more fully understand the educational outcomes of the hospitalist model.

Wachter and Goldman1 described the hospitalist model for inpatient care more than a decade ago. The Society of Hospital Medicine (SHM) defines hospitalists as physicians whose primary professional focus is the general medical care of hospitalized patients. Their activities include patient care, teaching, research, and leadership related to hospital medicine.2 This care delivery model has enjoyed exponential growth, with approximately 20,000 hospitalists in the United States, and an estimated 30,000 by the end of the decade.3‐5 Currently, 29% of hospitals, including 55% of those with at least 200 beds, employ hospitalists to coordinate inpatient care.6 Data suggest that hospitalists promote cost containment and decrease length of stay without negatively affecting rates of death, readmission, or patient satisfaction.7‐15

In academic settings, hospitalists also provide a substantial amount of teaching to trainees,16‐18 and the hospitalist model represents a fundamental change in the delivery of inpatient education. Traditional ward attendings were typically a heterogeneous group of subspecialists, laboratory‐based clinician‐scientists, and general internists, many of whom attended and taught relatively infrequently. By virtue of focusing purely on inpatient care, hospitalists are more intimately involved with inpatient care systems, as well as the teaching challenges (and opportunities) of the inpatient setting. The theoretical educational benefits of hospitalists include greater availability, more expertise in hospital medicine, and more emphasis on cost‐effective care.7, 18, 19 Concerns that trainees would have diminished autonomy and less exposure to subspecialist care have not been borne out.16, 20, 21

The purpose of this study was to examine the role of hospitalists on inpatient trainee education. We systematically reviewed the literature to determine the impact of hospitalists compared to nonhospitalist attendings on medical students' and residents' education.

MATERIALS AND METHODS

Data Sources

We searched the MEDLINE, Database of Reviews of Effectiveness (DARE), National Health Service (NHS) Economic Evaluation Database (EED), Health Technology Assessment (HTA), and Cochrane Collaboration databases for citations using the term hospitalist through November 2007, and updated the literature search through October 1, 2008. Additionally, we manually searched the bibliographies of relevant retrieved articles and national meeting abstracts from the SHM (2002‐2007), Society of General Internal Medicine (SGIM) (2001‐2007), and Pediatric Academic Societies (PAS) (2000‐2007). The authors of included meeting abstracts were contacted for additional information.

Data Selection

We included English‐language studies that reported the effects of hospitalist attending physicians on the knowledge, skills, or attitudes of medical students or residents in an inpatient setting, and compared these outcomes to a comparison group of trainees taught by nonhospitalist attending physicians. We excluded opinion articles, review articles, descriptions of curricula, surveys of program leaders, and evaluations of teaching without trainee assessments.

Data Extraction

We developed a standardized data extraction form based on the Best Evidence Medical Education (BEME) Collaboration protocol.22 The following information was extracted from each article: study design and measurement scale; attending and trainee information; study setting; response rate, if available; outcomes measuring attending physician's teaching ability; and outcomes assessing trainees' attitudes, knowledge, and skills. Open‐ended items solicited overall impression, concerns, new insights, and avenues for research not already captured in the data extraction form. A meta‐analysis was not performed due to varying measures for teacher assessments.
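The extraction form described above could be modeled as a small record type, one instance per included article. All field names here are hypothetical, since the BEME‐based form itself is not reproduced in the article:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExtractionRecord:
    """One row of the standardized data extraction form described above.

    Field names are hypothetical; the original BEME-based form is not
    reproduced in the article.
    """
    study_design: str                 # e.g., quasirandomized, historical control
    measurement_scale: str            # rating scale used for evaluations
    setting: str                      # inpatient setting of the study
    attending_info: str               # attending groups compared
    trainee_info: str                 # learner levels and counts
    response_rate: Optional[float] = None        # not all studies reported one
    teaching_outcomes: list = field(default_factory=list)  # attending teaching measures
    trainee_outcomes: list = field(default_factory=list)   # attitudes, knowledge, skills
    open_ended_notes: str = ""        # overall impression, concerns, new insights
```

Using `Optional` for the response rate mirrors the review's observation that only 7 of 8 studies reported one; list fields with `default_factory` keep each record's outcome lists independent.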

One investigator (P.N.) performed the literature search and a second investigator (K.E.H.) reviewed and confirmed the appropriateness of the articles retained and excluded based on review of the titles and abstracts. Next, 3 investigators (P.N., K.E.H., S.R.) confirmed that all the included articles met inclusion criteria. All 3 independently abstracted each article and coded the strength of findings and methodological quality based on: (1) response rate: (2) number of trainees and attendings; (3) control for additional education interventions; (4) explicit indication of random allocation of trainees to attendings; and (5) presence of a contemporaneous comparison group of nonhospitalist attendings. The level of behavioral impact by the 4‐level Kirkpatrick hierarchy was also recorded for each study to assess the strength of the intervention.23 The strength of data was rated for each study on a scale of 1 to 5, with 1 = no clear conclusions can be drawn; 2 = results ambiguous, but appears to be a trend; 3 = conclusions can probably be based on results; 4 = results are clear and very likely to be true; and 5 = results are unequivocal. Disagreements about search criteria, data extraction, and classification of study results were resolved by consensus.

RESULTS

Search Results

The database searches yielded 711 articles (Figure 1). Based on review of titles and abstracts, 32 articles were retrieved for full‐text review. During full‐text review, we eliminated 26 studies because they had no nonhospitalist control group,7, 16, 18, 2427 were opinion or review articles,19, 21, 2834 examined hospitalists' roles without trainee outcomes,17, 3540 surveyed program administration,41 or did not involve hospitalists.42, 43 Ultimately, 6 citations published between 2002 and 2007 met all inclusion criteria (Table 1).4449 The updated literature search through October 1, 2008 did not yield any additional relevant studies.

Figure 1
Search and selection of included articles.
Summary of Studies
Location, yearreference Learners (n) Number of Attendings Attending Ward Responsibilities (weeks per year) Attending Experience (mean years postgraduation) Attending Gender (% female) Survey Response Rate (%) Data Strength
  • Meeting abstracts.

  • Brigham & Women's Hospital, University of California San Francisco, University of Chicago, University of Washington, University of Illinois, University of New Mexico.

  • Data strength: 1 (no clear conclusions can be drawn), 2 (results ambiguous, but appears to be a trend), 3 (conclusions can probably be based on results), 4 (results are clear and very likely to be true), 5 (results are unequivocal).

University of Chicago, 200244 PGY‐unspecified (86) 2‐4 hospitalists; unknown nonhospitalists 12‐24 hospitalists; 4‐8 nonhospitalists 58 2
Children's Hospital, Boston, 200245 PGY‐1, PGY‐3 (unknown) 8 hospitalists; 75 nonhospitalists 12‐16 hospitalists; 2‐4 nonhospitalists 63 2
Oregon Health & Sciences, 200446 MS3 (138) 6 hospitalists; 11 nonhospitalists 22.8 hospitalists; 6.4 nonhospitalists 4.2 hospitalists; 10.9 nonhospitalists 2/6 (33%) hospitalists; 4/11 (36%) nonhospitalists 72 3
University of California, San Francisco, 200447 MS3‐4, PGY1‐3 (917) 17 hospitalists; 39 general internists; 13 subspecialists 12 hospitalists; 3.24 nonhospitalists 6/17 (35%) hospitalists; 17/52 (33%) nonhospitalists 91 4
Grady Memorial, 200448 MS3‐4, PGY1‐3 (unknown) 12 hospitalists; 27 general internists; 51 subspecialists 24 hospitalists; 6 nonhospitalists 6.1 hospitalists; 9.7 general internists; 21.6 subspecialists 6/12 (50%) hospitalists; 16/51 (31%) nonhospitalists 81 3
Penn State Children's Hospital, 200749 MS3 (67) 2 hospitalists; 8 nonhospitalists 2 MDs covered 32 hospitalists; 8 MDs covered 28 nonhospitalists 1/2 (50%) hospitalists; 2/8 (25%) nonhospitalists 100 3
Multiple sites, 200550* MS3 (294) 54 2
California Pacific Medical Center, 200651* PGY‐unspecified (unknown) 1

Examination of meeting abstracts yielded a total of 7,062 abstracts (Figure 2), of which 9 abstracts were retrieved for full‐text review. Two abstracts met inclusion criteria (Table 1).50, 51 Excluded meeting abstracts included published studies that were already abstracted as manuscripts,52, 53 had no nonhospitalist control group,54, 55 did not involve hospitalists,56 surveyed program administrators,57 or examined hospitalists' roles without trainee outcomes.58 Our communications with abstract authors did not yield any relevant additional information.

Figure 2
Search and selection of included meeting abstracts.

Study Settings, Designs, and Outcomes

Six of 8 included studies occurred in an internal medicine inpatient setting: 4 in university hospitals,44, 46, 47, 50 1 in a public safety‐net hospital,48 and 1 in a community teaching hospital.51 The remaining 2 studied the inpatient pediatric wards in university hospitals.45, 49

In 7 of 8 included studies, trainees were assigned to work with hospitalists or nonhospitalists according to the study site's standard method for allocating trainees to rotations; trainees were not allowed to choose their supervising attending. We considered these studies to be quasirandomized. The other study compared nonhospitalist attending evaluations the year prior to implementing hospitalists to hospitalist attending evaluations the year afterward.45

Studies measured trainee attitudes through routinely administered evaluations,46, 47, 49, 51 dedicated surveys,44, 48, 50 or both.45 One also qualitatively coded trainees' written responses to determine themes.48

Characteristics of Learners

Studies assessed only residents,44, 45, 51 only third‐year medical students,46, 49, 50 or residents and third‐year and fourth‐year medical students.47, 48 The amount of time trainees spent with each attending physician ranged from 2 to 4 weeks. One‐half of the studies reported the number of trainees responding to surveys in each attending group. Two studies had an equivalent number of trainees respond for each attending group,47, 49 while the other 2 had approximately twice as many trainees working with hospitalists respond.46, 50 No studies reported other characteristics of trainees assigned to the different attending groups.

Characteristics of Attendings

Hospitalists were described as attending between 12 and 32 weeks per year while nonhospitalists worked 2 to 12 weeks, except in 1 study where nonhospitalists worked 28 weeks (Table 1).49 Two studies separated nonhospitalists into general internists and subspecialists47, 48 but only 1 contrasted the weeks on service for the 2 groups of nonhospitalists.48 On average, hospitalists tended to be younger and have less experience than nonhospitalist attendings (Table 1). In those reporting attending gender, there was no significant difference between the 2 attending groups.

Methodological Quality

Because all of the included studies evaluated only trainee attitudes, all were coded as Level 1 on the Kirkpatrick hierarchy, which covers learners' views on the learning experience: its organization, presentation, content, teaching methods, and the quality of instruction.23

The methodological quality of the studies varied. Seven studies used a contemporaneous control group, and 1 study45 employed a noncontemporaneous comparison of hospitalists to nonhospitalists. Seven included studies reported the trainee response rate, which varied widely (from 54% to 100%) (Table 1). None of the studies reported whether any other educational interventions that could have biased study results were implemented during the study period. Of the 6 published studies, the strength of the data was rated as a 2 or 3 for 5 studies and as a 4 for 1 study (Table 1).

Trainee Evaluations Comparing Hospitalists to All Nonhospitalists

The most commonly evaluated attending measures included trainees' overall satisfaction with attendings (n = 8 studies),44‐51 trainees' ratings of teaching effectiveness (n = 5 studies),44, 46, 47, 49, 50 attending effectiveness of feedback delivery (n = 4 studies),45‐48 trainees' perceptions of attending knowledge (n = 3 studies),45, 47, 48 and attending involvement of trainees in patient care decisions (n = 3 studies) (Table 2).44, 45, 47 Several other outcomes were reported in 2 or fewer studies (Table 3). All studies reported nonnormally distributed evaluation ratings, with trainee ratings of all attending groups skewed toward high ratings.

Trainee Ratings of Attending Teaching
Number of Studies Evaluated Hospitalists Better Nonhospitalists Better No Difference
  • NOTE: Studies that achieved statistical significance in demonstrating increased trainee satisfaction for each domain are listed in each attending group's column.

  • Hospitalists compared to subspecialists.

  • Hospitalists compared to general internists.

Overall rating of attending 8 44‐46, 47*, 48‐51 47
Teaching effectiveness 5 44, 48‐50 46
Feedback delivery 4 45, 47*, 48 47 46
Involvement of trainees in patient care decisions 3 45, 48 44
Quality of ward rounds 2 44, 49
Effectiveness as a role model 2 45, 48
Communication of rotation goals 1 46
Emphasizes evidence‐based care 1 48
Emphasizes cost‐effective care 1 47
Availability 2 45 48
Perceived knowledge 3 45, 48 47
Bedside teaching 1 45
Apparent interest in psychosocial aspects of care 1 47* 47
Results of Studies Evaluating Hospitalists vs. Nonhospitalists
Reference Citation, Location, Year Study Design Major Findings Data Strength
  • Meeting abstracts.

  • Brigham & Women's Hospital, University of California‐San Francisco, University of Chicago, University of Washington, University of Illinois, University of New Mexico.

  • NOTE: Shows the individual study results for outcomes measured in 3 or more studies.

  • Abbreviations: CI, confidence interval; MS, medical student; PGY, postgraduate year; SD, standard deviation.

Chung et al.,44 University of Chicago, 2002 Retrospective, quasirandomized with contemporaneous controls % of Internal Medicine house staff very satisfied with Internal Medicine attendings (5‐point scale, 5 = very satisfied): End of month: hospitalist 58%, nonhospitalist 39%; end of year: hospitalists 76%, nonhospitalists 48%. Compared to residents who did not work with hospitalists, residents with experience with hospitalists had fewer concerns about loss of autonomy (8% vs. 41%, P = 0.02), and no difference in concerns about exposure to different faculty (41% vs. 60%, P = 0.08) 2
Landrigan et al.,45 Children's Hospital, Boston, 2002 Retrospective, single group with historical control Overall satisfaction with inpatient experience (4‐point scale, 4 = extremely satisfied): interns, 3.5 with hospitalists, 3.2 with nonhospitalists. PGY3, 3.5 with hospitalists, 3.5 with nonhospitalists. Rating of teaching effectiveness (5‐point scale, 5 = excellent): hospitalists 4.7, nonhospitalists 4.4. PGY3s reported less ability to make decisions independently, less ability to supervise with hospitalist attendings, but differences did not meet statistical significance (P = 0.07). 2
Hunter et al.,46 Oregon Health & Science University, 2004 Retrospective, quasirandomized with contemporaneous controls MS3 combined overall rating of attending during Internal Medicine clerkship (9‐point scale, 9 = outstanding): hospitalists 8.56, nonhospitalists 8.22. Combined rating was a composite of 7 parameters (communication of rotation goals, establishing learning climate, use of educational time, teaching style, evaluation and feedback, contribution to growth and development, and effectiveness as clinical teacher). 3
Hauer et al.,47 University of California, San Francisco, 2004 Retrospective, quasirandomized with contemporaneous controls Internal medicine house staff, MS4 and MS3 overall satisfaction with Internal Medicine attending (9‐point scale, 9 = excellent): hospitalists 8.3 (SD 0.9), nonhospitalist general internists 7.9 (SD 1.3), subspecialists 8.1 (SD 1.7); P = 0.01 for comparison of hospitalists vs. nonhospitalist generalists, P = 0.20 for comparison of hospitalists vs. subspecialists. Attending teaching effectiveness (5‐point scale, 5 = excellent): hospitalists 4.8 (SD 0.6), general internists 4.5 (SD 0.8), specialists 4.5 (SD 1.1); P < 0.001 for comparison of hospitalists vs. nonhospitalist generalists, P = 0.03 for comparison of hospitalists vs. subspecialists. Attending knowledge (9‐point scale): hospitalists 8.2 (SD 1.1), nonhospitalists 7.9 (SD 1.2), subspecialists 8.1 (SD 1.5); P < 0.01 for comparison of hospitalists vs. nonhospitalist generalists, P = 0.10 for comparison of hospitalists vs. subspecialists. Attending valuation of trainee opinions (9‐point scale): hospitalists 8.3 (SD 0.9), nonhospitalist generalists 8.2 (SD 1.3), subspecialists 8.1 (SD 1.7); P = 0.20 for comparison of hospitalists vs. nonhospitalist generalists; P = 0.60 for comparison of hospitalist vs. subspecialists. Provision of feedback (9‐point scale): hospitalists 7.9 (SD 1.6), nonhospitalist generalists 7.2 (SD 2.3), subspecialists 7.0 (SD 2.5); P < 0.01 for comparison of hospitalists vs. nonhospitalist generalists, P = 0.01 for comparison of hospitalists vs. subspecialists. 4
Kripalani et al.,48 Grady Memorial, 2004 Retrospective, quasirandomized with contemporaneous controls Internal medicine house staff, MS4 and MS3 satisfaction with Internal Medicine attending teaching effectiveness (25‐item McGill Clinical Tutor Evaluation, maximum score 150): hospitalists 134.5 (95% CI, 130.2‐138.8), general internists 135.0 (95% CI, 131.2‐138.8), specialists 126.3 (95% CI, 120.4‐132.1). 3
Geskey and Kees‐Folts,49 Penn State Children's Hospital, 2007 Retrospective, quasirandomized with contemporaneous controls MS3 overall satisfaction with Pediatric attending teaching (4‐point scale, 4 = excellent), hospitalists 3.9, nonhospitalists 3.0. MS3s rated hospitalists higher than nonhospitalists in all 4 attending characteristics measured: teaching effectiveness, effectiveness as a pediatrician, student advocacy effectiveness, and overall. 3
Arora et al.,50 Multiple sites, 2005* Retrospective, quasirandomized with contemporaneous controls MS3 overall satisfaction with Internal Medicine clerkship (5‐point scale, 5 = very satisfied): hospitalists 4.5, nonhospitalists 4.3. Trends toward greater emphasis on education (P = 0.07) and higher quality attending rounds (P = 0.07) with hospitalists. Effects of hospitalists on resident perceptions of autonomy not reported. 2
Chintharajah and Aronowitz,51 California Pacific Medical Center, 2006* Retrospective, with contemporaneous controls. Method of assignment to attending type not stated. Internal Medicine house staff ratings of Internal Medicine attendings: using a 9‐point scale in 1998‐2002 and a 5‐point scale in 2003‐2005, hospitalists were rated higher than nonhospitalists in all areas assessed in 1998‐2002 but in only 3 areas in 2003‐2005 (accessibility, feedback, and teaching procedures). Data not shown. 1

Of the 8 studies comparing hospitalists to all nonhospitalists, trainees were statistically significantly more satisfied with hospitalists in all but 1 (Table 3).44‐51 Hospitalists' overall teaching effectiveness was rated significantly higher in 4 studies,44, 47, 49, 50 but 1 did not demonstrate a difference.46 Hospitalists were also rated higher at feedback delivery compared to all nonhospitalists, with 2 studies45, 47 and 1 abstract reporting hospitalists' superiority. One other study showed increased satisfaction with hospitalists' feedback only in comparison to subspecialists.48 Hospitalists were perceived as being more knowledgeable and as allowing greater trainee involvement in patient care decisions in 2 of the 3 studies addressing each of these questions. Addressing preconceived notions, 1 study demonstrated that residents who had never worked with hospitalists were significantly more concerned about hospitalists negatively impacting their clinical autonomy than residents who had worked with hospitalists at least once.44

Hospitalists were rated as more available in 1 study45 with a trend toward more availability in another.47 Trainee satisfaction was higher with hospitalists on other measures including quality of ward rounds,44, 49 effectiveness as a role model,45, 48 communication of rotations' goals,46 emphasis on evidence‐based medicine,48 and emphasis on cost‐effective care.47 In 1 study, trainees were significantly more satisfied with the bedside teaching of nonhospitalists.45 In another, trainees felt that, compared to hospitalists, general internists seemed to be more interested in the psychosocial aspects of patients' care.48

Trainee Evaluations Comparing Hospitalists to Outpatient Generalists and Subspecialists

Of the studies that examined whether the type of nonhospitalist (general internist vs. subspecialist) impacted trainee ratings, 1 showed that trainees were equally satisfied with hospitalists and general internists but that general internists were rated higher than hospitalists for feedback delivery.48 Hospitalists were rated significantly higher than subspecialists overall and for feedback delivery.48 The other study that subclassified nonhospitalists into general internists and subspecialists showed that hospitalists were more highly rated than both general internists and subspecialists overall and for teaching effectiveness and feedback delivery.47

DISCUSSION

This systematic review of the literature describing hospitalists as educators shows that trainees are generally more satisfied with hospitalists than nonhospitalists on their inpatient rotations. Hospitalists were rated more highly than traditional ward attendings overall, as well as for teaching effectiveness44, 47, 49, 50 and feedback delivery.45, 47 Limited data (3 studies each) indicate that trainees perceive hospitalists as being at least as knowledgeable as traditional attendings and as encouraging similar levels of trainee involvement in patient care decisions. Trainees may be more satisfied with hospitalists than with general internists or subspecialists, although some comparisons have shown that general internists may be preferred. No studies have evaluated the impact of hospitalists on trainee outcomes beyond satisfaction, such as knowledge acquisition, rotation grades, or clinical performance.

Our review suggests that, with increased time spent on the wards, hospitalists exhibit attributes consistent with specialization in inpatient care.1, 14 Hospitalists were noted to emphasize cost‐effectiveness47 and evidence‐based medicine48 and to conduct higher‐quality ward rounds.44, 49 Hospitalists are uniquely qualified to teach about inpatient goals and processes such as decreasing hospital length of stay and providing cost‐effective care.1, 3, 7, 12, 15 Trainees see hospitalists as role models,45, 47 and the site‐defined nature of hospital medicine promotes trainees' access to hospitalist attendings. Such accessibility has been described as an independent attribute of excellent physician role models.59, 60, 62 The findings of our methodologically rigorous systematic review extend the conclusions of a narrative review of the literature on hospitalists as educators, which also identified favorable ratings of hospitalists alongside unresolved concerns about resident autonomy and the role of subspecialist teachers in hospitalist systems.63

Diminished trainee autonomy was an early concern about hospitalists in academic medical centers.16, 20, 21 In the earliest study we identified that assessed autonomy, trainees perceived similar amounts of autonomy with hospitalists compared to nonhospitalists.44 Interestingly, house staff in more experienced hospitalist models even described experiencing increased involvement in patient care when supervised by hospitalist attendings in both the pediatric and internal medicine settings.45, 47 Hospitalists might also generate more clinical diversity for house staff by reducing length of stay and thereby enhancing opportunities for learning with newly admitted patients.13, 14, 64

The studies that did not demonstrate increased satisfaction with hospitalists may be instructive as well. One negative study46 reported results from a program that instituted the hospitalist model in response to declining trainee satisfaction. With an emphasis on improving the educational experience, nonhospitalist physicians who were already rated highly as teachers were also selected to attend on the wards. Nonetheless, trainees were still more satisfied with hospitalists overall. One study showed that hospitalists were rated more highly than subspecialists at delivering feedback but less highly than general internists.48 The authors suggest that their general internists may have been at a more favorable career stage, being a few more years out of training; such correlations of age and rank with evaluations have not been previously described.60, 61

The disadvantages of hospitalists in trainee education identified by this systematic review include lower ratings, relative to general internists, for the quality of bedside teaching in one study45 and for interest in the psychosocial aspects of care in another.48 The decline in satisfaction with bedside teaching is a concern, but the comparison was noncontemporaneous, and the authors explained that team size had increased, resulting in an overall decrease in time at the bedside.45 The concern that decreased patient lengths of stay may translate to less time spent with patients and less bedside teaching is not new.18 Although hospitalists have shown particular educational advantages, the balance of clinical efficiency and education remains challenging. Trainees' perception that hospitalists were less interested in the psychosocial aspects of care than general internists48 was also anticipated when inpatient attending models began to shift, because hospitalization may now be viewed by trainees as discontinuous from a patient's outpatient care and social situation.18 Nevertheless, hospitalists have been able to achieve quality measures such as decreased length of stay without decreasing patient satisfaction.10, 12

Our study has several limitations. First, all attendings were rated highly in all studies. Such high ratings are commonly seen with educational evaluations,65 and this phenomenon creates a ceiling effect that limits variability within the group. Nevertheless, trainees rated hospitalists significantly higher than nonhospitalists overall in all of the included studies. The impact of these small but significant differences on trainees' learning and future clinical performance is unknown. Additionally, the distinction between hospitalists and nonhospitalists was not universal. Initially, it was proposed that academic hospitalists work as hospitalists 3 to 6 months each year.1 This definition held in almost all included studies that reported attending time on the wards, with hospitalists working 3 to 7 months and nonhospitalists working less than 3 months, but the observed variability does not permit a universal definition of a hospitalist. It is possible that publication bias influenced our findings toward positive ratings of hospitalists; we reviewed and included meeting abstracts to minimize this bias. We did not review family medicine meeting abstracts.

The included studies had some methodologic strengths, including quasirandom assignment of trainees and use of a contemporaneous control group in almost all studies. However, the overall methodologic strength was fair given limitations in response rates and reporting of cointerventions; we thus considered most studies to represent trends rather than definitive results. Finally, all of the studies meeting our inclusion criteria to date only evaluated trainees' attitudes and beliefs. Because knowledge and skills were not objectively assessed, it is unclear how increased trainee satisfaction translates to knowledge and skill acquisition on the wards. However, Miller's pyramid and its proposed modification, the Cambridge model, suggest that targeting attitudes precedes knowledge acquisition,66 and our study suggests the need for a research agenda examining the impact of hospitalists on trainees' future performance. Griffith et al.67 demonstrated an association between increased satisfaction with teaching and medical students' performance on clerkship examinations and the U.S. Medical Licensing Examination (USMLE) Step 2.

Overall, trainees were more satisfied with hospitalists' teaching and feedback delivery. Although the available studies are few, vary in quality, and cannot be compared using meta‐analytic techniques, the currently available data suggest that hospitalists improve learner satisfaction. More studies delineating the differences between hospitalists and nonhospitalist general internists are needed. Continued exploration of the effects of attending age and rank on trainee learning may help determine whether this effect is reproducible and which facets of attendings' teaching actually impact trainees' knowledge, skill acquisition, and behaviors. Because all studies to date have evaluated only attitudes, studies analyzing knowledge and skills are required to more fully understand the educational outcomes of the hospitalist model.

References
  1. Wachter RM, Goldman L.The emerging role of “hospitalists” in the American health care system.N Engl J Med.1996;335:514517.
  2. Society of Hospital Medicine. Definition of a Hospitalist. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=General_Information.
  3. Society of Hospital Medicine. Hospital Medicine Specialty Shows 20 Percent Growth. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Press_Releases.
  4. Kralovec PD, Miller JA, Wellikson L, Huddleston JM.The status of hospital medicine groups in the United States.J Hosp Med.2006;1:7580.
  5. Brown MD, Halpert A, McKean S, Sussman A, Dzau VJ.Assessing the value of hospitalists to academic health centers: Brigham and Women's Hospital and Harvard Medical School.Am J Med.1999;106:134137.
  6. Wachter RM, Katz P, Showstack J, Bindman AB, Goldman L.Reorganizing an academic medical service. Impact on cost, quality, patient satisfaction, and education.JAMA.1998;279:15601565.
  7. Wachter RM, Goldman L.Implications of the hospitalist movement for academic departments of medicine: lessons from the UCSF experience.Am J Med.1999;106:127133.
  8. Davis KM, Koch KE, Harvey JK, et al.Effects of hospitalists on cost, outcomes, and patient satisfaction in a rural health system.Am J Med.2000;108:621626.
  9. Craig DE, Hartka L, Likosky WH, et al.Implementation of a hospitalist system in a large health maintenance organization: the Kaiser Permanente experience.Ann Intern Med.1999;130:355359.
  10. Halpert AP, Pearson SD, LeWine HE, McKean SC.The impact of an inpatient physician program on quality, utilization, and satisfaction.Am J Manag Care.2000;6:549555.
  11. Meltzer DO, Shah MN, Morrison J.Decreased length of stay, costs and mortality in a randomized trial of academic hospitalists.J Gen Intern Med.2001;16:S208.
  12. Auerbach AD, Wachter RM, Katz P, Showstack J, Baron RB, Goldman L.Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes.Ann Intern Med.2002;137(11):859865.
  13. Lindenauer PK, Rothberg MB, Pekow PS, Kenwood C, Benjamin EM, Auerbach AD.Outcomes of care by hospitalists, general internists, and family physicians.N Engl J Med.2007;357(25):25892600.
  14. Goldman L.The impact of hospitalists on medical education and the academic health system.Ann Intern Med.1999;130:364367.
  15. Whitcomb WF, Nelson JR.The role of hospitalists in medical education.Am J Med.1999;107:305309.
  16. Hauer KE, Wachter RM.Implications of the hospitalist model for medical students' education.Acad Med.2001;76:324330.
  17. Haftel HM, Bozynski ME.Changing teaching for changing times: the effect of a hospitalist program on the education of students.Acad Med.2000;75:521.
  18. Wachter RM.Reflections: the hospitalist movement a decade later.J Hosp Med.2006;1(4):248252.
  19. Hollander H.Response to the effect of hospitalist systems on residency education: re‐incorporating medical subspecialists.Acad Med.2001;76:555556.
  20. Best Evidence Medical Education (BEME) Collaboration, Dundee, UK. Home page. Available at: http://www.bemecollaboration.org. Accessed May2009.
  21. Kirkpatrick DL.Evaluation of Training. In: Craig R, Mittel I, eds.Training and Development Handbook.New York:McGraw‐Hill;1967:87112.
  22. Kulaga ME, Charney P, O'Mahony SP, et al.The positive impact of initiation of hospitalist clinician educators.J Gen Intern Med.2004;19(4):293301.
  23. Dwight P, MacArthur C, Friedman JN, Parkin PC.Evaluation of a staff‐only hospitalist system in a tertiary care, academic children's hospital.Pediatrics.2004;114(6):15451549.
  24. Homme JH.How pediatric hospitalist programs can affect graduate medical education.Pediatr Ann.2003;32(12):822824.
  25. Marinella MA.A “hospitalist” rotation increases short‐term knowledge of fourth‐year medical students.South Med J.2002;95(3):374.
  26. Wachter RM.The hospitalist movement 10 years later: life as a Swiss army knife.MedGenMed.2006;8(3):30.
  27. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM.Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out.J Hosp Med.2006;1(4):257266.
  28. Pressel DM.Hospitalists in medical education: coming to an academic medical center near you.J Natl Med Assoc.2006;98(9):15011504.
  29. Abbo ED, Volandes AE.Teaching residents to consider costs in medical decision making.Am J Bioeth.2006;6(4):3334.
  30. Association of Program Directors in Internal Medicine;Fitzgibbons JP, Bordley DR, Berkowitz LR, Miller BW, Henderson MC.Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine.Ann Intern Med.2006;144(12):920926.
  31. Ranji SR, Rosenman DJ, Amin AN, Kripalani S.Hospital medicine fellowships: works in progress.Am J Med.2006;119(1):72.e1e7.
  32. Wilson SD.Employing hospitalists to improve residents' inpatient learning.Acad Med.2001;76(5):556.
  33. Glasheen JJ, Epstein KR, Siegal E, Kutner JS, Prochazka AV.The spectrum of community‐based hospitalist practice: a call to tailor internal medicine residency training.Arch Intern Med.2007;167(7):727728.
  34. McKean SC, Budnitz TL, Dressler DD, Amin AN, Pistoria MJ.How to use the core competencies in hospital medicine: a framework for curriculum development.J Hosp Med.2006;1(suppl 1):5767.
  35. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN.Core competencies in hospital medicine: development and methodology.J Hosp Med.2006;1(suppl 1):4856.
  36. O'Leary KJ, Liebovitz DM, Baker DW.How hospitalists spend their time: insights on efficiency and safety.J Hosp Med.2006;1(2):8893.
  37. Kingston M.Determining the professional attributes of a hospitalist: experience in one Australian metropolitan hospital.Intern Med J.2005;35(5):305308.
  38. Mufson MA.The internal medicine clerkship: the view from the vantage point of one chair of medicine.Am J Med.1999;107(2):109111.
  39. Shea JA, Wasfi YS, Kovath KJ, Asch DA, Bellini LM.The presence of hospitalists in medical education.Acad Med.2000;75(10 suppl):S34S36.
  40. Dent AW, Crotty B, Cuddihy HL, et al.Learning opportunities for Australian prevocational hospital doctors: exposure, perceived quality and desired methods of learning.Med J Aust.2006;184(9):436440.
  41. Khera N, Stroobant J, Primhak RA, Gupta R, Davies H.Training the ideal hospital doctor: the specialist registrars' perspective.Med Educ.2001;35(10):957966.
  42. Chung P, Morrison J, Jin L, Levinson W, Humphrey H, Meltzer D.Resident satisfaction on an academic hospitalist service: time to teach.Am J Med.2002;112(7):597601.
  43. Landrigan CP, Muret‐Wagstaff S, Chiang VW, Nigrin DJ, Goldmann DA, Finkelstein JA.Effect of a pediatric hospitalist system on housestaff education and experience.Arch Pediatr Adolesc Med.2002;156(9):877883.
  44. Hunter AJ, Desai SS, Harrison RA, Chan BK.Medical student evaluation of the quality of hospitalist and nonhospitalist teaching faculty on inpatient medicine rotations.Acad Med.2004;79(1):7882.
  45. Hauer KE, Wachter RM, McCulloch CE, Woo GA, Auerbach AD.Effects of hospitalist attending physicians on trainee satisfaction with teaching and with internal medicine rotations.Arch Intern Med.2004;164(17):18661871.
  46. Kripalani S, Pope AC, Rask K, et al.Hospitalists as teachers.J Gen Intern Med.2004;19(1):815.
  47. Geskey JM, Kees‐Folts D.Third‐year medical students' evaluation of hospitalist and nonhospitalist faculty during the inpatient portion of their pediatrics clerkships.J Hosp Med.2007;2(1):1722.
  48. Arora V, Wetterneck T, Schnipper J, et al. The effects of hospitalist teaching attendings on medical student satisfaction and career interest: results from the multicenter hospitalist study. Society of Hospital Medicine;2005 Annual Meeting Abstracts.
  49. Chintharajah S, Aronowitz P. Hospitalist teachers may lose their superiority over non‐hospitalist teachers in “mature” hospitalist systems. Society of General Internal Medicine;2006 Annual Meeting Abstracts.
  50. Hunter A, Desai S, Harrison R, Chan B. Medical student evaluation of the quality of hospitalist and non‐hospitalist teaching faculty on inpatient medicine rotations. Society of Hospital Medicine;2003 Annual Meeting Abstracts.
  51. Hauer KE, Auerbach A, Woo GA, Wachter RM. Effects of hospitalist attendings on trainee satisfaction with rotations. Society of General Internal Medicine;2002 Annual Meeting Abstracts.
  52. Phy M, Rosenman D, Huddleston J. Internal medicine and orthopedic residents' perception of education and satisfaction after the initiation of a non‐resident hospitalist service. Society of Hospital Medicine;2004 Annual Meeting Abstracts.
  53. O'Leary K, Chadha V, Fleming V, Baker D. Medical subinternship: student experience on a resident uncovered hospitalist service. Society of Hospital Medicine;2006 Annual Meeting Abstracts.
  54. Hefner JE, Elnicki DM, Barnard K, Painter T, McNeil M. A randomized controlled trial to evaluate the effect of dedicated clinical teachers (or “Educationalists”) on the internal medicine clerkship experience. Society of General Internal Medicine;2002 Annual Meeting Abstracts.
  55. Marratta D, Rajan S, Novotny J. Internal medicine residency program goals drive the development of hospitalist programs at teaching hospitals. Society of Hospital Medicine;2002 Annual Meeting Abstracts.
  56. McKean S, Hafler J. The role of the hospitalist in teaching. Society of General Internal Medicine;2003 Annual Meeting Abstracts.
  57. McLeod PJ, James CA, Abrahamowicz M.Clinical tutor evaluation: a 5‐year study by students on an inpatient service and residents in an ambulatory care clinic.Med Educ.1993;27:4854.
  58. Wright SM, Kern DE, Kolodner K, Howard DM, Brancati FL.Attributes of excellent attending‐physician role models.N Engl J Med.1998;339:19861992.
  59. Irby DM, Gillmore GM, Ramsey PG.Factors affecting ratings of clinical teachers by medical students and residents.J Med Educ.1987;62:17.
  60. Kroenke K, Simmons JO, Copley JB, Smith C.Attending rounds: a survey of physician attitudes.J Gen Intern Med.1990;5:229233.
  61. Goldenberg J, Glasheen JJ.Hospitalist educators: future of inpatient internal medicine training.Mt Sinai J Med.2008;75:430435.
  62. Landrigan CP, Conway PH, Edwards S, Srivastava R.Pediatric hospitalists: a systematic review of the literature.Pediatrics.2006;117:17361744.
  63. Speer AJ, Solomon DJ, Fincher RM.Grade inflation in internal medicine clerkships: results of a national survey.Teach Learn Med.2000;12:112116.
  64. Rethans JJ, Norcini JJ, Barón‐Maldonado M, et al.The relationship between competence and performance: implications for assessing practice performance.Med Educ.2002;36(10):901909.
  65. Griffith CH, Georgesen JC, Wilson JF.Six‐year documentation of the association between excellent clinical teaching and improved students' examination performances.Acad Med.2000;75(10 suppl):S62S64.
  21. Kirkpatrick DL.Evaluation of Training. In: Craig R, Mittel I, eds.Training and Development Handbook.New York:McGraw‐Hill;1967:87112.
  22. Kulaga ME, Charney P, O'Mahony SP, et al.The positive impact of initiation of hospitalist clinician educators.J Gen Intern Med.2004;19(4):293301.
  23. Dwight P, MacArthur C, Friedman JN, Parkin PC.Evaluation of a staff‐only hospitalist system in a tertiary care, academic children's hospital.Pediatrics.2004;114(6):15451549.
  24. Homme JH.How pediatric hospitalist programs can affect graduate medical education.Pediatr Ann.2003;32(12):822824.
  25. Marinella MA.A “hospitalist” rotation increases short‐term knowledge of fourth‐year medical students.South Med J.2002;95(3):374.
  26. Wachter RM.The hospitalist movement 10 years later: life as a Swiss army knife.MedGenMed.2006;8(3):30.
  27. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM.Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out.J Hosp Med.2006;1(4):257266.
  28. Pressel DM.Hospitalists in medical education: coming to an academic medical center near you.J Natl Med Assoc.2006;98(9):15011504.
  29. Abbo ED, Volandes AE.Teaching residents to consider costs in medical decision making.Am J Bioeth.2006;6(4):3334.
  30. Association of Program Directors in Internal Medicine;Fitzgibbons JP, Bordley DR, Berkowitz LR, Miller BW, Henderson MC.Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine.Ann Intern Med.2006;144(12):920926.
  31. Ranji SR, Rosenman DJ, Amin AN, Kripalani S.Hospital medicine fellowships: works in progress.Am J Med.2006;119(1):72.e1e7.
  32. Wilson SD.Employing hospitalists to improve residents' inpatient learning.Acad Med.2001;76(5):556.
  33. Glasheen JJ, Epstein KR, Siegal E, Kutner JS, Prochazka AV.The spectrum of community‐based hospitalist practice: a call to tailor internal medicine residency training.Arch Intern Med.2007;167(7):727728.
  34. McKean SC, Budnitz TL, Dressler DD, Amin AN, Pistoria MJ.How to use the core competencies in hospital medicine: a framework for curriculum development.J Hosp Med.2006;1(suppl 1):5767.
  35. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN.Core competencies in hospital medicine: development and methodology.J Hosp Med.2006;1(suppl 1):4856.
  36. O'Leary KJ, Liebovitz DM, Baker DW.How hospitalists spend their time: insights on efficiency and safety.J Hosp Med.2006;1(2):8893.
  37. Kingston M.Determining the professional attributes of a hospitalist: experience in one Australian metropolitan hospital.Intern Med J.2005;35(5):305308.
  38. Mufson MA.The internal medicine clerkship: the view from the vantage point of one chair of medicine.Am J Med.1999;107(2):109111.
  39. Shea JA, Wasfi YS, Kovath KJ, Asch DA, Bellini LM.The presence of hospitalists in medical education.Acad Med.2000;75(10 suppl):S34S36.
  40. Dent AW, Crotty B, Cuddihy HL, et al.Learning opportunities for Australian prevocational hospital doctors: exposure, perceived quality and desired methods of learning.Med J Aust.2006;184(9):436440.
  41. Khera N, Stroobant J, Primhak RA, Gupta R, Davies H.Training the ideal hospital doctor: the specialist registrars' perspective.Med Educ.2001;35(10):957966.
  42. Chung P, Morrison J, Jin L, Levinson W, Humphrey H, Meltzer D.Resident satisfaction on an academic hospitalist service: time to teach.Am J Med.2002;112(7):597601.
  43. Landrigan CP, Muret‐Wagstaff S, Chiang VW, Nigrin DJ, Goldmann DA, Finkelstein JA.Effect of a pediatric hospitalist system on housestaff education and experience.Arch Pediatr Adolesc Med.2002;156(9):877883.
  44. Hunter AJ, Desai SS, Harrison RA, Chan BK.Medical student evaluation of the quality of hospitalist and nonhospitalist teaching faculty on inpatient medicine rotations.Acad Med.2004;79(1):7882.
  45. Hauer KE, Wachter RM, McCulloch CE, Woo GA, Auerbach AD.Effects of hospitalist attending physicians on trainee satisfaction with teaching and with internal medicine rotations.Arch Intern Med.2004;164(17):18661871.
  46. Kripalani S, Pope AC, Rask K, et al.Hospitalists as teachers.J Gen Intern Med.2004;19(1):815.
  47. Geskey JM, Kees‐Folts D.Third‐year medical students' evaluation of hospitalist and nonhospitalist faculty during the inpatient portion of their pediatrics clerkships.J Hosp Med.2007;2(1):1722.
  48. Arora V, Wetterneck T, Schnipper J, et al. The effects of hospitalist teaching attendings on medical student satisfaction and career interest: results from the multicenter hospitalist study. Society of Hospital Medicine;2005 Annual Meeting Abstracts.
  49. Chintharajah S, Aronowitz P. Hospitalist teachers may lose their superiority over non‐hospitalist teachers in “mature” hospitalist systems. Society of General Internal Medicine;2006 Annual Meeting Abstracts.
  50. Hunter A, Desai S, Harrison R, Chan B. Medical student evaluation of the quality of hospitalist and non‐hospitalist teaching faculty on inpatient medicine rotations. Society of Hospital Medicine;2003 Annual Meeting Abstracts.
  51. Hauer KE, Auerbach A, Woo GA, Wachter RM. Effects of hospitalist attendings on trainee satisfaction with rotations. Society of General Internal Medicine;2002 Annual Meeting Abstracts.
  52. Phy M, Rosenman D, Huddleston J. Internal medicine and orthopedic residents' perception of education and satisfaction after the initiation of a non‐resident hospitalist service. Society of Hospital Medicine;2004 Annual Meeting Abstracts.
  53. O'Leary K, Chadha V, Fleming V, Baker D. Medical subinternship: student experience on a resident uncovered hospitalist service. Society of Hospital Medicine;2006 Annual Meeting Abstracts.
  54. Hefner JE, Elnicki DM, Barnard K, Painter T, McNeil M. A randomized controlled trial to evaluate the effect of dedicated clinical teachers (or “Educationalists”) on the internal medicine clerkship experience. Society of General Internal Medicine;2002 Annual Meeting Abstracts.
  55. Marratta D, Rajan S, Novotny J. Internal medicine residency program goals drive the development of hospitalist programs at teaching hospitals. Society of Hospital Medicine;2002 Annual Meeting Abstracts.
  56. McKean S, Hafler J. The role of the hospitalist in teaching. Society of General Internal Medicine;2003 Annual Meeting Abstracts.
  57. McLeod PJ, James CA, Abrahamowicz M.Clinical tutor evaluation: a 5‐year study by students on an inpatient service and residents in an ambulatory care clinic.Med Educ.1993;27:4854.
  58. Wright SM, Kern DE, Kolodner K, Howard DM, Brancati FL.Attributes of excellent attending‐physician role models.N Engl J Med.1998;339:19861992.
  59. Irby DM, Gillmore GM, Ramsey PG.Factors affecting ratings of clinical teachers by medical students and residents.J Med Educ.1987;62:17.
  60. Kroenke K, Simmons JO, Copley JB, Smith C.Attending rounds: a survey of physician attitudes.J Gen Intern Med.1990;5:229233.
  61. Goldenberg J, Glasheen JJ.Hospitalist educators: future of inpatient internal medicine training.Mt Sinai J Med.2008;75:430435.
  62. Landrigan CP, Conway PH, Edwards S, Srivastava R.Pediatric hospitalists: a systematic review of the literature.Pediatrics.2006;117:17361744.
  63. Speer AJ, Solomon DJ, Fincher RM.Grade inflation in internal medicine clerkships: results of a national survey.Teach Learn Med.2000;12:112116.
  64. Rethans JJ, Norcini JJ, Barón‐Maldonado M, et al.The relationship between competence and performance: implications for assessing practice performance.Med Educ.2002;36(10):901909.
  65. Griffith CH, Georgesen JC, Wilson JF.Six‐year documentation of the association between excellent clinical teaching and improved students' examination performances.Acad Med.2000;75(10 suppl):S62S64.
Issue
Journal of Hospital Medicine - 4(8)
Page Number
490-498
Display Headline
Effect of hospitalist attending physicians on trainee educational experiences: A systematic review
Legacy Keywords
clinical clerkship/methods, hospitalist, hospital teaching, internship methods, program evaluation, residency/methods
Article Source
Copyright © 2009 Society of Hospital Medicine
Correspondence Location
Professor of Clinical Medicine, 533 Parnassus Avenue, Box 0131, Department of Medicine, University of California, San Francisco, San Francisco, CA 94143

Systematic Review of Rapid Response Systems

Display Headline
Effects of rapid response systems on clinical outcomes: Systematic review and meta‐analysis

A medical emergency team1 is a group of clinicians trained to quickly assess and treat hospitalized patients showing acute signs of clinical deterioration. Equivalent terms used are rapid response team,2 critical care outreach team,3 and patient‐at‐risk team.4 A consensus panel5 recently endorsed use of the term rapid response system (RRS) to denote any system that uses a standard set of clinical criteria to summon caregivers to the bedside of a patient who is deemed unstable but not in cardiopulmonary arrest (in which case a standard resuscitation team would be summoned). Such teams primarily evaluate patients on general hospital wards.

RRSs have been developed in response to data indicating that patients frequently demonstrate premonitory signs or receive inadequate care prior to unanticipated intensive care unit (ICU) admission, cardiopulmonary arrest, or death outside the ICU.6-14 Earlier identification and treatment of such patients could prevent adverse clinical outcomes. The structure of RRSs varies but generally includes a physician and nurse and may also include other staff such as respiratory therapists.5 Teams are summoned by hospital staff to assess patients who meet specific clinical criteria (see box) or about whom the bedside staff has significant concern.15

Example of Rapid Response System Calling Criteria for Adult Patients
Any staff member may call the team if 1 of the following criteria is met:
Heart rate > 140/min or < 40/min
Respiratory rate > 28/min or < 8/min
Systolic blood pressure > 180 mm Hg or < 90 mm Hg
Oxygen saturation < 90% despite supplementation
Acute change in mental status
Urine output < 50 cc over 4 hours
Staff member has significant concern about patient's condition
Additional criteria used at some institutions:
Chest pain unrelieved by nitroglycerin
Threatened airway
Seizure
Uncontrolled pain
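
As an illustration, the calling criteria in the box can be encoded as a simple screening function. This is a hypothetical sketch only; the field names and triggering logic are assumptions for illustration, not taken from any study's actual implementation.

```python
# Illustrative sketch of the adult RRS calling criteria from the box above.
# Any single criterion being met is sufficient to summon the team.

def meets_rrs_criteria(obs: dict) -> bool:
    """Return True if any one of the example calling criteria is met."""
    hr = obs.get("heart_rate")
    rr = obs.get("resp_rate")
    sbp = obs.get("systolic_bp")
    spo2 = obs.get("spo2")

    return any([
        hr is not None and (hr > 140 or hr < 40),
        rr is not None and (rr > 28 or rr < 8),
        sbp is not None and (sbp > 180 or sbp < 90),
        # Saturation criterion applies despite supplemental oxygen
        spo2 is not None and spo2 < 90 and obs.get("on_supplemental_o2", False),
        obs.get("acute_mental_status_change", False),
        obs.get("urine_output_4h_cc") is not None and obs["urine_output_4h_cc"] < 50,
        # Subjective trigger: any staff member with significant concern
        obs.get("staff_concern", False),
    ])

print(meets_rrs_criteria({"heart_rate": 150}))               # True (tachycardia)
print(meets_rrs_criteria({"systolic_bp": 120, "spo2": 95}))  # False
```

Note that the subjective criteria (staff concern, mental status change) carry equal weight with the vital-sign thresholds, mirroring how the studies describe activation in practice.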

Initial studies of RRSs, performed primarily in Australia and the United Kingdom, showed promising reductions in unanticipated ICU admissions, cardiac arrests, and even overall inpatient mortality.1, 16, 17 The considerable enthusiasm generated by these studies18, 19 resulted in the Institute for Healthcare Improvement (IHI) incorporating RRSs into its 100,000 Lives campaign,2 and RRSs are now being implemented in the more than 3000 U.S. hospitals that joined the campaign. However, a recent commentary on rapid response teams20 and a systematic review of critical care outreach teams21 have raised concerns that this widespread implementation may not be justified by the available evidence. We performed a systematic review of studies of all variations of RRSs in order to determine their effect on patient outcomes and to characterize variations in their organization and implementation.

METHODS

Literature Search and Inclusion and Exclusion Criteria

We systematically searched MEDLINE, CINAHL, and BIOSIS through August 2006 for relevant studies using the various terms for RRSs (eg, medical emergency team, rapid response team, critical care outreach) and medical subject headings relevant to inpatient care and critical illness (eg, patient care team and resuscitation; the full search strategy is given in the Appendix). We also reviewed the abstract lists from the 2004 and 2005 American Thoracic Society and Society of Critical Care Medicine annual meetings and scanned reference lists from key articles.

We screened the abstracts of the articles identified by the search, and 2 independent reviewers abstracted potentially relevant articles using a standardized data abstraction form. Disagreements between the reviewers were resolved by consensus and, if necessary, discussion with a third reviewer. We included randomized controlled trials (RCTs), controlled before‐after studies, and interrupted time series, including simple before‐after studies with no contemporaneous control group, though we planned to separately analyze data from controlled studies if possible. We included only English‐language articles.

On the basis of RRS features in widely cited articles2224 and the recommendations of a recent consensus statement,5 we defined an RRS as having the following characteristics: (1) its primary responsibility is to intervene whenever hospitalized patients become unstable before cardiopulmonary arrest occurs; (2) it must primarily provide care outside the ICU and emergency department; (3) specific clinical criteria must be in place that define instability and trigger a call to the team; and (4) it must be expected to respond within a specified time. We defined these criteria in order to distinguish studies of RRSs from studies of cardiac arrest (code blue) teams or traditional consulting services.

To be included in the analysis, articles had to report the effects of a rapid response system on at least 1 of these outcomes: inpatient mortality, inpatient cardiac arrest, or unscheduled ICU transfer. We used the definitions of cardiac arrest and unscheduled ICU transfer given in the primary studies. In addition to these outcomes, we abstracted information on the number of admissions and the number of RRS calls during the study period. To maximize the comparability of study outcomes, we calculated the rates of mortality, cardiac arrest, unscheduled ICU transfer, and RRS calls per 1000 admissions for studies that did not supply data in this fashion.
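
The normalization just described is simple arithmetic; a minimal sketch (the counts in the example are hypothetical, not drawn from any included study):

```python
def rate_per_1000(events: int, admissions: int) -> float:
    """Convert a raw event count into a rate per 1000 admissions."""
    return 1000 * events / admissions

# Hypothetical example: 64 cardiac arrests among 21,090 admissions
print(round(rate_per_1000(64, 21090), 2))  # 3.03
```

Expressing mortality, cardiac arrest, unscheduled ICU transfer, and RRS calls on this common denominator is what allows studies with very different census sizes to be compared directly.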

Assessment of Study Quality

Quality scoring instruments for studies included in systematic reviews generally focus on randomized controlled trials, which we anticipated would account for a minority of included studies. On the basis of recommendations for the assessment of methodology for nonrandomized study designs,25, 26 we identified and abstracted 4 important determinants of internal validity (Table 1). The consensus statement5 recommends monitoring the effectiveness of RRSs by measuring the rate of unscheduled ICU admissions (defined as an unplanned admission to the ICU from a general ward27) and cardiac arrests of patients who were not listed as do not resuscitate (DNR). As the definition of unscheduled ICU admission allows room for subjectivity, we considered the blinding of assessment of this outcome to study group assignment to be important, especially for retrospective studies. Measurement of cardiac arrests should be less susceptible to blinding issues, but one of the functions of an RRS can be to initiate discussions that result in changes in the goals of care and code status.22 Thus, excluding patients made DNR by the team from cardiac arrest calculations could falsely lower the cardiac arrest rate.

Study Quality Criteria
Quality measures
  • Elements affecting study internal validity and translatability. These elements were chosen based on the methods of the Cochrane collaboration.25 These criteria were not used to determine article inclusion or exclusion.

A. Internal validity
1. Did the study have a contemporaneous control group?
2. If there was no contemporaneous control group, did the study report data for more than 1 time point before and after the intervention?
3. Were nonobjective primary outcomes (eg, unplanned ICU transfer) measured in a blinded fashion?
4. Were patients made DNR by the RRS included in calculations of the cardiac arrest and mortality rates?
B. Generalizability
5. Was the intervention performed independent of other quality improvement interventions targeting the care of critically ill patients?
6. Did the study report the number of admissions and RRS calls during the study period?
7. Did the study report the availability of intensivists before and after the intervention?

We also abstracted 3 separate elements of study quality pertaining to the external validity or generalizability of included studies (Table 1). These elements were defined a priori by consensus reached by discussion among the reviewers. These elements were intended to provide a framework for interpreting the included studies and to guide subgroup analyses. They were not used to form a composite quality score.

Statistical Analysis

We performed a random‐effects meta‐analysis to calculate summary risk ratios with 95% confidence intervals for the effects of RRSs on inpatient mortality, cardiopulmonary arrest, and unscheduled ICU admission. Included in the meta‐analysis were the studies that reported total number of admissions and incidence of each outcome before and after institution of the RRS. For randomized trials that reported pre‐ and postintervention data, we treated the intervention and control groups as separate trials in order to be able to compare their effects with the before‐after trials. For studies that reported results adjusted for clustering (ie, by hospital), we back‐calculated the unadjusted results by multiplying the standard error by the square of the design factor coefficient.28, 29 We calculated the I2 statistic to assess heterogeneity.30 All analyses were performed using Stata version 8.2 (Stata Corporation, College Station, TX).
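
The pooling step described above can be sketched in code. The following is a minimal, illustrative implementation of a DerSimonian-Laird random-effects meta-analysis of risk ratios with Cochran's Q and the I2 statistic; the event counts in the usage example are hypothetical, and the original analysis was performed in Stata, not with this code.

```python
import math

def log_rr(events_after, n_after, events_before, n_before):
    """Log risk ratio (after vs. before RRS) and its standard error."""
    rr = (events_after / n_after) / (events_before / n_before)
    se = math.sqrt(1 / events_after - 1 / n_after
                   + 1 / events_before - 1 / n_before)
    return math.log(rr), se

def random_effects_rr(studies):
    """DerSimonian-Laird summary RR, 95% CI, and I2 from (log RR, SE) pairs."""
    y = [lr for lr, _ in studies]
    w = [1 / se ** 2 for _, se in studies]                  # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    k = len(studies)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                      # between-study variance
    ws = [1 / (se ** 2 + tau2) for _, se in studies]        # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
    se_mu = math.sqrt(1 / sum(ws))
    i2 = 100 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return (math.exp(mu),
            (math.exp(mu - 1.96 * se_mu), math.exp(mu + 1.96 * se_mu)),
            i2)

# Hypothetical (deaths, admissions) before and after RRS for 3 studies
studies = [log_rr(120, 10000, 150, 10000),
           log_rr(80, 6000, 110, 6000),
           log_rr(60, 5000, 65, 5000)]
summary_rr, ci, i2 = random_effects_rr(studies)
print(summary_rr, ci, i2)
```

When between-study heterogeneity (tau2) is zero this reduces to the fixed-effect estimate; larger I2 values, such as the 62.1% reported below for mortality, indicate that a substantial share of the observed variation reflects true differences between studies rather than chance.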

RESULTS

The database searches identified 861 citations, and 1 additional prepublication study was supplied by the study's first author31; 89 articles underwent full‐text review (Fig. 1). Most studies excluded during the full‐text review did not meet our criteria for a study of an RRS or were observational studies or review articles. For instance, Sebat et al.32 published a study of a shock team at a community hospital that intervened when any patient suffered nontraumatic shock; the study did not meet our inclusion criteria as all patients were admitted to the ICU, most directly from the emergency department. Another frequently cited study, by Bristow et al.,16 was excluded as it was a case‐control study. Thirteen studies3, 22-24, 31, 33-40 (11 full‐length studies and 2 abstracts) met all criteria for inclusion.

Figure 1
Article identification and triage trial flow diagram, as recommended by the QUOROM statement for improving the methodological quality of systematic reviews and meta‐analyses.54 Reasons for exclusion were: R1, not a study of a rapid response system; R2, ineligible study design (not simple before‐after study, controlled before‐after study, interrupted time series, or randomized, controlled trial); R3, no eligible outcomes (did not report effect of RRS on in‐hospital cardiac arrest, unscheduled ICU admission, or inpatient mortality); R4, overlapping publication. Data from 1 article55 were pooled with an included article,34 and the other56 was excluded because it contained longer‐term follow‐up data from another included study.23

Characteristics of Included Trials

The characteristics of included studies are outlined in Table 2. Five studies were performed in Australia, 4 in the United States, and 4 in the United Kingdom. All were conducted in academic teaching hospitals. Two studies37, 38 focused on pediatric inpatients, and the remainder involved hospitalized adults. The RRS intervened for all hospitalized patients in all but 2 studies (1 focused on surgical inpatients,33 and in the other the RRS evaluated only patients discharged from the ICU3). In 2 studies,31, 39 the RRS was available to evaluate outpatients, such as hospital visitors, in addition to inpatients.

Studies Included in Meta‐analysis
For each study: location and hospital type; patient population; RRS composition; measured outcomes; definitions of unscheduled ICU admission and cardiac arrest. Abbreviations: RRS, rapid response system; ICU, intensive care unit; RN, registered nurse; ED, emergency department; DNR, do not resuscitate; NA, not applicable.

Buist et al., 200220 (Australia; tertiary‐care academic hospital). Patients: adult inpatients. RRS composition: ICU registrar, medical registrar, ICU nurse. Measured outcomes: DNR status, cardiac arrest, unscheduled ICU admission, mortality. Unscheduled ICU admission definition: not supplied. Cardiac arrest definition: any code blue team activation.

Bellomo et al., 200321 (Australia; tertiary‐care academic hospital). Patients: adult inpatients. RRS composition: ICU nurse and ICU fellow attended all calls; ICU attending (8 AM‐8 PM) and medical registrar attended if requested. Measured outcomes: mortality, cardiac arrest, DNR status. Unscheduled ICU admission definition: NA. Cardiac arrest definition: unresponsive, with no pulse or blood pressure, and basic life support initiated.

Pittard et al., 200322 (United Kingdom; tertiary‐care academic hospital). Patients: adult inpatients on surgical wards. RRS composition: senior critical care nurses and medical staff. Measured outcomes: unscheduled ICU admission. Unscheduled ICU admission definition: any patient not admitted directly from operating theater after elective surgery. Cardiac arrest definition: NA.

Bellomo et al., 200431 (Australia; tertiary‐care academic hospital). Patients: adult postoperative patients. RRS composition: ICU attending, ICU fellow, ICU registered nurse, medical fellow. Measured outcomes: mortality, unscheduled ICU admission, ICU length of stay, DNR status. Unscheduled ICU admission definition: postoperative admission to ICU because of a clinical complication. Cardiac arrest definition: NA.

DeVita et al., 200432 (United States; tertiary‐care academic hospital). Patients: adult inpatients. RRS composition: ICU physician, anesthesiologist, 2 other physicians, 2 ICU nurses, floor nurse, respiratory therapist. Measured outcomes: cardiac arrest. Unscheduled ICU admission definition: NA. Cardiac arrest definition: any code blue team activation.

Garcea et al., 20043 (United Kingdom; teaching hospital). Patients: adult patients discharged from ICU. RRS composition: 2 ICU nurses, 1 ICU nurse specialist, ICU consultant physician (if needed). Measured outcomes: mortality, unscheduled ICU admission. Unscheduled ICU admission definition: emergency readmissions to ICU. Cardiac arrest definition: NA.

Kenward et al., 200438 (United Kingdom; teaching hospital). Patients: adult inpatients. RRS composition: not stated. Measured outcomes: mortality, cardiac arrest. Unscheduled ICU admission definition: NA. Cardiac arrest definition: loss of spontaneous circulation.

Priestley et al., 200433 (United Kingdom; teaching hospital). Patients: adult inpatients. RRS composition: ICU nurse consultant; ICU physician available if needed. Measured outcomes: mortality, overall length of stay. Unscheduled ICU admission definition: NA. Cardiac arrest definition: NA.

Hillman et al., 200534 (Australia; 23 hospitals, 17 academic and 6 nonacademic). Patients: adult inpatients. RRS composition: varied between study hospitals; required to have at least 1 MD and 1 RN from ED or ICU. Measured outcomes: mortality, cardiac arrest, unscheduled ICU admission, DNR status. Unscheduled ICU admission definition: any unscheduled admission to ICU from general ward. Cardiac arrest definition: no palpable pulse.

Hunt et al., 2005 (abstract)36 (United States; pediatric tertiary‐care academic hospital). Patients: pediatric inpatients. RRS composition: not provided. Measured outcomes: cardiac arrest. Unscheduled ICU admission definition: NA. Cardiac arrest definition: not provided.

Meredith et al., 2005 (abstract)37 (United States; tertiary‐care academic hospital). Patients: adult inpatients, outpatients, and visitors. RRS composition: ICU registered nurse, respiratory therapist. Measured outcomes: mortality. Unscheduled ICU admission definition: NA. Cardiac arrest definition: not provided.

Tibballs et al., 200535 (Australia; pediatric tertiary‐care academic hospital). Patients: pediatric inpatients. RRS composition: ICU physician, medical registrar, ED physician, ICU nurse. Measured outcomes: cardiac arrest, unscheduled ICU admission. Unscheduled ICU admission definition: not supplied. Cardiac arrest definition: not provided.

King et al., 200629 (United States; tertiary‐care academic hospital). Patients: adult inpatients, outpatients, and visitors. RRS composition: hospitalist, internal medicine resident, ICU nurse, ward nurse, pharmacist. Measured outcomes: cardiac arrest. Unscheduled ICU admission definition: NA. Cardiac arrest definition: not provided.

RRS Structure, Calling Criteria, and Responsibilities

Seven studies22, 23, 31, 33, 34, 36, 37 that described the team composition used variants of the medical emergency team model, a physician‐led team (Table 2). In 6 of these 7 studies, the team included a critical care physician (attending or fellow) and an ICU nurse; in the sole RCT (the MERIT study36), the team structure varied between hospitals, consisting of a nurse and physician from either the emergency department or ICU. Hospitalists, who are involved in RRS responses at many U.S. hospitals, were primary team leaders of the RRS in only 1 study.31 In 2 studies34, 37 the RRS also responded to code blue calls, and in 4 studies23, 31, 33, 39 the RRS and the code blue team had separate personnel; the remaining studies did not define the distinction between RRS and code blue team.

Factors Affecting Internal Validity and Generalizability of Studies Included in Meta‐analysis
For each study: contemporaneous control group; data reported at more than 1 time point before/after the intervention; RRS calling rate reported; outcomes analysis included patients made DNR by the team; blinded measurement of nonobjective outcomes; intensivist always available; other quality improvement (QI) efforts during the study. Abbreviations: SBA, simple before‐after (quasi‐experimental) study; ITS, interrupted time series; RCT, randomized controlled trial; NA, not applicable; NR, not reported; APLS, advanced pediatric life support.

Buist et al., 200220: control group, No; multiple time points, No; calling rate reported, Yes; DNR patients included, No (mortality); blinded measurement, No; intensivist available, NR; other QI efforts, NR.
Bellomo et al., 200321: control group, No; multiple time points, No; calling rate reported, Yes; DNR patients included, Yes (mortality); blinded measurement, NA; intensivist available, Yes (ICU fellow); other QI efforts, No.
Pittard et al., 200322: control group, No; multiple time points, No; calling rate reported, Yes; DNR patients included, NA; blinded measurement, No; intensivist available, NR; other QI efforts, NR.
Bellomo et al., 200431: control group, No; multiple time points, No; calling rate reported, Yes; DNR patients included, Yes (mortality); blinded measurement, No; intensivist available, Yes (ICU fellow); other QI efforts, No.
DeVita et al., 200432: control group, No; multiple time points, Yes; calling rate reported, Yes; DNR patients included, NA; blinded measurement, No; intensivist available, Yes (critical care attending physician); other QI efforts, NR.
Garcea et al., 200433: control group, No; multiple time points, No; calling rate reported, No; DNR patients included, Unclear; blinded measurement, No; intensivist available, NR; other QI efforts, NR.
Kenward et al., 200438: control group, No; multiple time points, No; calling rate reported, Yes; DNR patients included, Unclear; blinded measurement, No; intensivist available, NR; other QI efforts, NR.
Priestley et al., 200433: control group, No (interrupted time series); multiple time points, Yes; calling rate reported, No; DNR patients included, NA; blinded measurement, No; intensivist available, NR; other QI efforts, NR.
Hillman et al., 200534: control group, Yes; multiple time points, No; calling rate reported, Yes; DNR patients included, Unclear; blinded measurement, Yes; intensivist available, NR; other QI efforts, No.
Hunt et al., 2005 (abstract)36: control group, No; multiple time points, No; calling rate reported, Yes; DNR patients included, NA; blinded measurement, NR; intensivist available, NR; other QI efforts, NR.
Meredith et al., 2005 (abstract)37: control group, No; multiple time points, No; calling rate reported, Yes; DNR patients included, No; blinded measurement, NA; intensivist available, No; other QI efforts, No.
Tibballs et al., 200535: control group, No; multiple time points, No; calling rate reported, Yes; DNR patients included, Unclear; blinded measurement, No; intensivist available, NR; other QI efforts, Yes (educational workshops/more training in APLS).
King et al., 200629: control group, No; multiple time points, Yes; calling rate reported, Yes; DNR patients included, NA; blinded measurement, No; intensivist available, Yes; other QI efforts, No.

In 4 studies the RRSs were led by nurses. One study published in abstract form39 used the rapid response team model, consisting of a critical care nurse and a respiratory therapist, with assistance as needed from the primary medical staff and a critical care physician. Three studies3, 24, 35 from UK hospitals used the critical care outreach (CCO) model, in which ICU‐trained nurses respond initially with assistance from intensivists. The CCO model also involves follow‐up on patients discharged from the ICU and proactive rounding on unstable ward patients.

The hospitals used broadly similar approaches to determining when to summon the RRS, relying on combinations of objective clinical criteria (eg, vital sign abnormalities) and subjective criteria (eg, acute mental status change, staff member concerned about patient's condition). Three studies3, 24, 35 used a formal clinical score (the Patient‐At‐Risk score or the Modified Early Warning score) to trigger calls to the RRS. Three studies, 2 of them from the same institution,23, 33 reported the frequency of specific triggers for RRS activation. Concern by bedside staff and respiratory distress were the most frequent activators of the RRS.

Study Internal Validity and Generalizability

One study,36 the MERIT trial, conducted in Australia, was a cluster‐randomized RCT (randomized by hospital) that adhered to recommended elements of design and reporting for studies of this type.41 In this study, hospitals in the control group received an educational intervention on caring for deteriorating patients only; hospitals in the intervention group received the educational module and started an RRS. An additional study35 identified itself as a randomized trial, but randomization occurred at the hospital ward level, with introduction of the intervention (critical care outreach) staggered so that at different points an individual ward could have been in either the control or intervention group; therefore, this study was considered an interrupted time series. All other included trials were before‐after studies with no contemporaneous control group.

Most studies did not meet criteria for internal validity or generalizability (Table 2). Two studies3, 35 did not report the number of RRS calls during the study period. One study22 omitted patients whose resuscitation status was changed after RRS evaluation from the calculation of inpatient mortality; thus, the patients who had been made do not resuscitate by the RRS did not contribute to the calculated mortality rate. The disposition of these patients was unclear in another study.36 All studies measured clinical outcomes retrospectively, and no studies reported blinding of outcomes assessors for nonobjective outcomes (eg, unplanned ICU admission). Studies generally did not report on the availability of intensivists or if other quality improvement interventions targeting critically ill patients were implemented along with the RRS.

RRS Usage and Effects on Patient Outcomes

Seven studies22-24, 34, 36-38 reported enough information to calculate the RRS calling rate (4 studies24, 31, 39, 40 reported the total number of calls but not the number of admissions, and 2 studies3, 35 did not report either). In these 7 studies, the calling rate varied from 4.5 to 25.8 calls per 1000 admissions. Three studies documented the calling rate before and after the intervention: a study at a hospital with a preexisting RRS34 reported that the calling rate increased from 13.7 to 25.8 calls per 1000 admissions after an intensive education and publicity program; in a pediatric trial,38 the overall emergency calling rate (for cardiac arrests and medical emergencies) was reported to increase from 6.6 to 10.4 per 1000 admissions; and in the MERIT trial,36 calls increased from 3.1 to 8.7 per 1000 admissions.

Effects of RRS on Clinical Outcomes

Nine studies3, 22, 23, 33, 35-37, 39, 40 reported the effect of an RRS on inpatient mortality, 9 studies22, 23, 31, 33, 34, 36-38, 40 reported its effect on cardiopulmonary arrests, and 6 studies3, 22, 24, 33, 36, 37 reported its effect on unscheduled ICU admissions. Of these, 7 trials that reported mortality and cardiopulmonary arrests and 6 studies that reported unscheduled ICU admissions supplied sufficient data for meta‐analysis.

Observational studies demonstrated improvement in inpatient mortality, with a summary risk ratio of 0.82 (95% CI: 0.74‐0.91; heterogeneity I² = 62.1%; Fig. 2). However, the magnitude of these improvements was very similar to that seen in the control group of the MERIT trial (RR 0.73, 95% CI: 0.53‐1.02). The intervention group of the MERIT trial also demonstrated a reduction in mortality (RR 0.65, 95% CI: 0.48‐0.87) that was not significantly different from that of the control group. We found a similar pattern in studies reporting RRS effects on cardiopulmonary arrests (Fig. 3). The observational studies did not show any effect on the risk of unscheduled ICU admissions (summary RR 1.08, 95% CI: 0.96‐1.22; heterogeneity I² = 79.1%), nor did the MERIT trial (Fig. 4).

Figure 2
Effect of RRS on inpatient mortality. The forest plot compares the relative risk of mortality after implementation of an RRS with that before implementation. For the MERIT trial, we treated the 2 study arms (intervention and control) as separate before‐after trials in order to compare them with the observational studies. The study by Garcea et al.3 evaluated the effect of an RRS on readmission to the ICU; the supplied outcomes are for in‐hospital mortality of patients readmitted to the ICU only, so the baseline mortality rate is not reported. The study by Bellomo et al. (2004)33 evaluated the effect of an RRS on postoperative patients only, whereas the other study performed at the same institution, published in 2003,23 reported outcomes for all inpatients; therefore, we subtracted the results of the 2004 study from those reported in the 2003 study to avoid counting the same outcomes twice (RR, relative risk; NR, not reported; NA, not applicable).
Figure 3
Effect of RRS on cardiopulmonary arrests. The forest plot shows the relative risk of cardiopulmonary arrest after implementation of an RRS. As in Figure 2, the MERIT trial intervention and control groups were treated as separate before‐after trials.
Figure 4
Effect of RRS on unscheduled ICU admissions. The forest plot shows the relative risk of an unscheduled ICU admission after implementation of an RRS. As in Figures 2 and 3, the MERIT trial intervention and control groups were treated as separate before‐after trials. The study by Garcea et al.3 evaluated the effect of an RRS on readmissions to the ICU; the supplied outcomes are for unscheduled readmissions to the ICU, so the baseline unscheduled ICU admission rate is not reported.

DISCUSSION

Despite the strong face validity of the RRS concept, the current literature on medical emergency teams, rapid response teams, and critical care outreach suffers from substantial flaws that make it difficult to determine the effect of an RRS on patient outcomes. These flaws include the use of suboptimal study designs, failure to report important cointerventions, problematic definitions of outcomes, and lack of verification of the validity of the measured outcomes. As a result, very little empiric data are available to define the effectiveness of RRSs or to provide guidance for hospitals planning to implement an RRS.

Though early studies reported that RRSs appeared to reduce mortality and cardiac arrest rates, the sole randomized trial of an RRS (the MERIT trial36) showed no differences between intervention and control hospitals for any clinical outcome. Both inpatient mortality and cardiac arrest rates declined in the intervention and control groups of the MERIT trial, and the reductions in these outcomes in observational trials were similar to those seen in the MERIT control group. This strongly implies that other factors besides the RRS were responsible for the results of previous before‐after studies. These studies, which have been widely cited by proponents of the RRS, suffer from methodological limitations intrinsic to the study design and issues with outcome measurement that may have introduced systematic bias; these factors likely explain the contrast between the generally positive results of the before‐after studies and the negative results of the MERIT trial.

Most early RRS trials used an uncontrolled before‐after study design, as is common in quality improvement studies.42 This study design cannot account for secular trends or other factors, including other QI interventions, that could influence the effect of an intervention.26 The statistically significant reduction in inpatient mortality in the control arm of the MERIT trial is an instructive example; this decline could have been a result of the educational intervention on caring for deteriorating patients, other ongoing QI projects at the individual hospitals, or simply random variation during the relatively short (6‐month) follow‐up period. Such factors could also entirely account for the impressive results seen in the initial uncontrolled RRS studies. Nearly all the studies we reviewed also did not discuss any aspects of the hospital context that could influence outcomes for critically ill patients, such as the nurse‐staffing ratio,43 ICU bed availability,44–46 overall hospital census,47 or availability of intensivists48 or hospitalists.49 Failure to control for, or at least report, important aspects of the environment in which the intervention was performed is akin to failing to report baseline patient comorbidities or concurrent therapies in a study of a drug's effectiveness.

Our review also suggests how bias in the measurement of clinical outcomes may have contributed to the apparent effect of RRSs. In 1 before‐after study, patients for whom RRS activation resulted in a change in code status to do not resuscitate (DNR) were excluded from calculations of mortality,22, 50 resulting in underreporting of mortality after RRS implementation. The disposition of such patients was unclear in 3 other studies.3, 36, 40 Some studies22, 34 defined cardiopulmonary arrest as any activation of the code blue team, regardless of whether the patient was actually in cardiac arrest. This almost inevitably would result in fewer arrests after implementation of the RRS, as the indications for calling the code blue team would be narrower. Finally, nearly all studies used trends in nonobjective primary outcomes (eg, unplanned ICU transfer) to support RRS effects but did not validate any of these outcomes (eg, how often reviewers agreed that an ICU transfer was preventable), and none of the assessors of these outcomes were blinded.

Some have attributed the MERIT trial's failure to demonstrate a benefit of the RRS to inadequate implementation, as its RRS calling rate of 8.7 calls per 1000 admissions was less than the 15 calls per 1000 admissions cited as optimal in a mature RRS.51 However, published studies generally reported a calling rate of 4‐5 calls per 1000 admissions,22, 23, 37 with only 1 trial reporting a higher calling rate.34

A recent commentary20 and a systematic review of critical care outreach teams21 both addressed the effectiveness of RRSs. We sought to examine the effects of all RRS subtypes and, using quantitative analysis and assessment of methodological quality, to determine the overall effect of RRSs. The results of our analysis (which included data from several newer studies31, 38, 39) support and extend the conclusion of prior reviews: RRSs, although a potentially promising intervention, do not unequivocally benefit patients and do not merit more widespread implementation until more evidence becomes available. Our analysis also demonstrates that many studies widely cited as supporting wide implementation of RRSs are flawed and probably not generalizable.

Despite these caveats, RRSs remain an intuitively attractive concept and may be of benefit at some hospitals. Further studies in this area should focus on identifying which patient populations are at high risk for clinical decompensation, identifying the role of clinical structures of care (eg, nurse‐staffing ratio, presence of hospitalists) in preventing adverse outcomes, and determining which specific RRS model is most effective. As well, more information is needed about educating bedside staff and RRS team members, as this is likely critical to the success of the team. Unfortunately, only the article by King et al.31 provided sufficient detail about the implementation process to assist hospitals in planning an RRS. The remaining articles gave only scant details about the intervention and its implementation, a common problem noted in the quality improvement literature.42, 52, 53

Our analysis had several limitations. We attempted to identify as many RRS trials as possible by searching multiple databases and reviewing abstract proceedings, but as the RRS literature is in its infancy, we may not have located unpublished studies or gray literature. There is no validated system for evaluating the methodological strength of nonrandomized studies; therefore, we assessed study quality on the basis of prespecified criteria for internal and external validity. Finally, we found significant statistical heterogeneity in our quantitative analyses, indicating that the variability in treatment effects between individual studies was greater than that expected by chance. As the primary reason we conducted a meta‐analysis was to compare the results of before‐after trials with those of the randomized MERIT trial, we did not further explore the reasons for this heterogeneity, although variation in patient populations and RRS structure likely accounts for a significant proportion of it.

Although there is a theoretical basis for implementing a rapid response system, the published literature shows inconsistent benefits to patients and suffers from serious methodological flaws. Future studies of RRSs should attempt to define which patient populations are at risk, the essential characteristics of RRSs, effective implementation strategies, and, most important, whether any RRS improves clinical outcomes. Until such evidence is available, hospitals should not be mandated to establish an RRS and should consider prioritizing quality improvement resources for interventions with a stronger evidence base.

Acknowledgements

The authors thank Emmanuel King, MD, for graciously providing a copy of his manuscript prior to publication and Alexis Meredith, MD, for providing additional information regarding his study. Dr. Shojania holds a Government of Canada Research Chair in Patient Safety and Quality Improvement.

APPENDIX


Literature Search Strategy (Performed through August 2006)
Search terms Citations
1 {(rapid [ti] AND (response [ti] OR resuscitation [ti]) OR (patient at risk [ti])} AND (program [ti] OR team* [ti] OR service* [ti]) 23
2 medical emergency team* [ti] OR medical crisis team* [ti] OR {(critical [ti] OR intensive [ti]) AND care [ti] AND outreach [ti]} 87
3 hospital [ti] AND resuscitation [ti] AND team* [ti] 11
4 medical emergency team* [ab] OR rapid response team [ab] OR medical crisis team* [ab] 89
5 #1 OR #2 OR #3 OR #4 158
6 Resuscitation [mh] OR heart arrest [mh] OR hospital mortality [mh] 72,488
7 (patient care team [mh] OR critical care [mh] OR intensive care units [mh]) AND (patient readmission [mh] OR organization and administration [mh]) 20,321
8 #6 AND #7 1,419
9 {(randomised[ti] OR randomized[ti] OR controlled[ti] OR intervention[ti] OR evaluation[ti] OR comparative[ti] OR effectiveness[ti] OR evaluation[ti] OR feasibility[ti]) AND (trial[ti] OR studies[ti] OR study[ti] OR program[ti] OR design[ti])} OR clinical trial[pt] OR randomized controlled trial[pt] OR epidemiologic studies[mh] OR evaluation studies[mh] OR comparative study[mh] OR feasibility studies[mh] OR intervention studies[mh] OR program evaluation[mh] OR epidemiologic research design[mh] OR systematic5 2,688,847
10 #8 AND #9 748
11 #5 OR #10 806
References
  1. Lee A, Bishop G, Hillman KM, Daffurn K. The medical emergency team. Anaesth Intensive Care. 1995;23(2):183–186.
  2. Berwick DM, Calkins DR, McCannon CJ, Hackbarth AD. The 100 000 Lives Campaign: setting a goal and a deadline for improving health care quality. JAMA. 2006;295(3):324–327.
  3. Garcea G, Thomasset S, McClelland L, Leslie A, Berry DP. Impact of a critical care outreach team on critical care readmissions and mortality. Acta Anaesthesiol Scand. 2004;48:1096–1100.
  4. Fletcher SJ, Flabouris A. The patient‐at‐risk team. Anaesthesia. 2000;55(2):198.
  5. Devita MA, Bellomo R, Hillman K, et al. Findings of the First Consensus Conference on Medical Emergency Teams. Crit Care Med. 2006;34:2463–2478.
  6. Cioffi J. Recognition of patients who require emergency assistance: a descriptive study. Heart Lung. 2000;29(4):262–268.
  7. Hillman KM, Bristow PJ, Chey T, et al. Antecedents to hospital deaths. Intern Med J. 2001;31:343–348.
  8. Hillman KM, Bristow PJ, Chey T, et al. Duration of life‐threatening antecedents prior to intensive care admission. Intensive Care Med. 2002;28:1629–1634.
  9. Hodgetts TJ, Kenward G, Vlachonikolis IG, Payne S, Castle N. The identification of risk factors for cardiac arrest and formulation of activation criteria to alert a medical emergency team. Resuscitation. 2002;54:125–131.
  10. Kause J, Smith G, Prytherch D, Parr M, Flabouris A, Hillman K. A comparison of antecedents to cardiac arrests, deaths and emergency intensive care admissions in Australia and New Zealand, and the United Kingdom—the ACADEMIA study. Resuscitation. 2004;62:275–282.
  11. Subbe CP, Kruger M, Rutherford P, Gemmel L. Validation of a modified Early Warning Score in medical admissions. QJM. 2001;94:521–526.
  12. Young MP, Gooder VJ, McBride K, James B, Fisher ES. Inpatient transfers to the intensive care unit: delays are associated with increased mortality and morbidity. J Gen Intern Med. 2003;18(2):77–83.
  13. Schein RM, Hazday N, Pena M, Ruben BH, Sprung CL. Clinical antecedents to in‐hospital cardiopulmonary arrest. Chest. 1990;98:1388–1392.
  14. Franklin C, Mathew J. Developing strategies to prevent in‐hospital cardiac arrest: analyzing responses of physicians and nurses in the hours before the event. Crit Care Med. 1994;22(2):244–247.
  15. DeVita MA, Bellomo R, Hillman K. Introduction to the rapid response systems series. Jt Comm J Qual Patient Saf. 2006;32:359–360.
  16. Bristow PJ, Hillman KM, Chey T, et al. Rates of in‐hospital arrests, deaths and intensive care admissions: the effect of a medical emergency team. Med J Aust. 2000;173:236–240.
  17. Goldhill DR, Worthington L, Mulcahy A, Tarling M, Sumner A. The patient‐at‐risk team: identifying and managing seriously ill ward patients. Anaesthesia. 1999;54:853–860.
  18. Kerridge RK. The medical emergency team: no evidence to justify not implementing change. Med J Aust. 2000;173:228–229.
  19. Kerridge RK, Saul WP. The medical emergency team, evidence‐based medicine and ethics. Med J Aust. 2003;179:313–315.
  20. Winters BD, Pham J, Pronovost PJ. Rapid response teams—walk, don't run. JAMA. 2006;296:1645–1647.
  21. Esmonde L, McDonnell A, Ball C, et al. Investigating the effectiveness of critical care outreach services: a systematic review. Intensive Care Med. 2006;32:1713–1721.
  22. Buist MD, Moore GE, Bernard SA, Waxman BP, Anderson JN, Nguyen TV. Effects of a medical emergency team on reduction of incidence of and mortality from unexpected cardiac arrests in hospital: preliminary study. BMJ. 2002;324:387–390.
  23. Bellomo R, Goldsmith D, Uchino S, et al. A prospective before‐and‐after trial of a medical emergency team. Med J Aust. 2003;179:283–287.
  24. Pittard AJ. Out of our reach? Assessing the impact of introducing a critical care outreach service. Anaesthesia. 2003;58:882–885.
  25. Cochrane Collaboration Effective Practice and Organisation of Care group. Available at: http://www.epoc.uottawa.ca/inttime.pdf. Accessed August 4, 2006.
  26. Shadish W, Cook T, Campbell D. Experimental and Quasi‐Experimental Designs for Generalized Causal Inference. Boston, MA: Houghton Mifflin; 2002.
  27. Cretikos M, Parr M, Hillman K, et al. Guidelines for the uniform reporting of data for Medical Emergency Teams. Resuscitation. 2006;68(1):11–25.
  28. Kerry SM, Bland JM. The intracluster correlation coefficient in cluster randomisation. BMJ. 1998;316:1455.
  29. Donner A, Klar N. Issues in the meta‐analysis of cluster randomized trials. Stat Med. 2002;21:2971–2980.
  30. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta‐analyses. BMJ. 2003;327:557–560.
  31. King E, Horvath R, Shulkin D. Establishing a rapid response team (RRT) in an academic hospital: one year's experience. J Hosp Med. 2006;1:296–305.
  32. Sebat F, Johnson D, Musthafa AA, et al. A multidisciplinary community hospital program for early and rapid resuscitation of shock in nontrauma patients. Chest. 2005;127:1729–1743.
  33. Bellomo R, Goldsmith D, Uchino S, et al. Prospective controlled trial of effect of medical emergency team on postoperative morbidity and mortality rates. Crit Care Med. 2004;32:916–921.
  34. DeVita MA, Braithwaite RS, Mahidhara R, Stuart S, Foraida M, Simmons RL. Use of medical emergency team responses to reduce hospital cardiopulmonary arrests. Qual Saf Health Care. 2004;13:251–254.
  35. Priestley G, Watson W, Rashidian A, et al. Introducing critical care outreach: a ward‐randomised trial of phased introduction in a general hospital. Intensive Care Med. 2004;30:1398–1404.
  36. Hillman K, Chen J, Cretikos M, et al. Introduction of the medical emergency team (MET) system: a cluster‐randomised controlled trial. Lancet. 2005;365:2091–2097.
  37. Tibballs J, Kinney S, Duke T, Oakley E, Hennessy M. Reduction of paediatric in‐patient cardiac arrest and death with a medical emergency team: preliminary results. Arch Dis Child. 2005;90:1148–1152.
  38. Hunt EA, Shilkofski N, Rinke ML, et al. The effect of transition from a traditional code team to a rapid response team in a children's center: a before and after intervention trial [abstract]. Crit Care Med. 2005;33(12 suppl):A17.
  39. Meredith A, Simpson SQ, Cleek C, Williamson T, O'Brien‐Ladner A. Improved hospital mortality by institution of a rapid response team in a university hospital. Chest. 2005;128(suppl S):182S.
  40. Kenward G, Castle N, Hodgetts T, Shaikh L. Evaluation of a medical emergency team one year after implementation. Resuscitation. 2004;61(3):257–263.
  41. Campbell MK, Elbourne DR, Altman DG. CONSORT statement: extension to cluster randomised trials. BMJ. 2004;328:702–708.
  42. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff. 2005;24(1):138–150.
  43. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987–1993.
  44. Sprung CL, Geber D, Eidelman LA, et al. Evaluation of triage decisions for intensive care admission. Crit Care Med. 1999;27:1073–1079.
  45. Strauss MJ, LoGerfo JP, Yeltatzie JA, Temkin N, Hudson LD. Rationing of intensive care unit services. An everyday occurrence. JAMA. 1986;255:1143–1146.
  46. Selker HP, Griffith JL, Dorey FJ, D'Agostino RB. How do physicians adapt when the coronary care unit is full? A prospective multicenter study. JAMA. 1987;257:1181–1185.
  47. Sprivulis PC, Da Silva JA, Jacobs IG, Frazer AR, Jelinek GA. The association between hospital overcrowding and mortality among patients admitted via Western Australian emergency departments. Med J Aust. 2006;184:208–212.
  48. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288:2151–2162.
  49. Auerbach AD, Wachter RM, Katz P, Showstack J, Baron RB, Goldman L. Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes. Ann Intern Med. 2002;137:859–865.
  50. Subbe CP. Critical care outreach team's effect on patient outcome: other conclusions are possible. BMJ. 2004;328:347; author reply.
  51. The "MERIT" Trial of medical emergency teams in Australia: an analysis of findings and implications for the 100,000 Lives Campaign. Institute for Healthcare Improvement, 2006. Available at: http://www.ihi.org/NR/rdonlyres/F3401FEF‐2179‐4403‐8F67–B9255C57E207/0/LancetAnalysis81505.pdf. Accessed August 17, 2006.
  52. Grimshaw J, Eccles M, Thomas R, et al. Toward evidence-based quality improvement. Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966–1998. J Gen Intern Med. 2006;21(suppl 2):S14–S20.
  53. Hagedorn H, Hogan M, Smith JL, et al. Lessons learned about implementing research evidence into clinical practice. Experiences from VA QUERI. J Gen Intern Med. 2006;21(suppl 2):S21–S24.
  54. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta‐analyses of randomised controlled trials: the QUOROM statement. Quality of reporting of meta‐analyses. Lancet. 1999;354:1896–1900.
  55. Foraida MI, DeVita MA, Braithwaite RS, Stuart SA, Brooks MM, Simmons RL. Improving the utilization of medical crisis teams (Condition C) at an urban tertiary care hospital. J Crit Care. 2003;18(2):87–94.
  56. Jones D, Bellomo R, Bates S, et al. Long term effect of a medical emergency team on cardiac arrests in a teaching hospital. Crit Care. 2005;9:R808–R815.
Issue: Journal of Hospital Medicine - 2(6)
Pages: 422-432
Keywords: systematic review, rapid response systems

A medical emergency team1 is a group of clinicians trained to quickly assess and treat hospitalized patients showing acute signs of clinical deterioration. Equivalent terms used are rapid response team,2 critical care outreach team,3 and patient‐at‐risk team.4 A consensus panel5 recently endorsed use of the term rapid response system (RRS) to denote any system that uses a standard set of clinical criteria to summon caregivers to the bedside of a patient who is deemed unstable but not in cardiopulmonary arrest (in which case a standard resuscitation team would be summoned). Such teams primarily evaluate patients on general hospital wards.

RRSs have been developed in response to data indicating that patients frequently demonstrate premonitory signs or receive inadequate care prior to unanticipated intensive care unit (ICU) admission, cardiopulmonary arrest, or death outside the ICU.6–14 Earlier identification and treatment of such patients could prevent adverse clinical outcomes. The structure of RRSs varies but generally includes a physician and nurse and may also include other staff such as respiratory therapists.5 Teams are summoned by hospital staff to assess patients meeting specific clinical criteria (see box) about whom the bedside staff has significant concern.150

Example of Rapid Response System Calling Criteria for Adult Patients
Any staff member may call the team if 1 of the following criteria is met:
Heart rate > 140/min or < 40/min
Respiratory rate > 28/min or < 8/min
Systolic blood pressure > 180 mmHg or < 90 mm Hg
Oxygen saturation < 90% despite supplementation
Acute change in mental status
Urine output < 50 cc over 4 hours
Staff member has significant concern about patient's condition
Additional criteria used at some institutions:
Chest pain unrelieved by nitroglycerin
Threatened airway
Seizure
Uncontrolled pain
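The calling criteria above amount to an any-of rule over a handful of bedside observations. A minimal sketch of such a rule check follows; the field names and normal-range defaults are illustrative assumptions, not drawn from any hospital system or study in this review.

```python
def meets_rrs_criteria(obs: dict) -> bool:
    """Return True if any adult RRS calling criterion from the box above is met.

    obs maps observation names to values; missing observations default to
    normal-range values so they do not trigger a call. Field names are
    illustrative assumptions, not from any specific hospital system.
    """
    hr = obs.get("heart_rate", 80)
    rr = obs.get("resp_rate", 16)
    sbp = obs.get("systolic_bp", 120)
    criteria = [
        hr > 140 or hr < 40,                           # heart rate out of range
        rr > 28 or rr < 8,                             # respiratory rate out of range
        sbp > 180 or sbp < 90,                         # systolic blood pressure out of range
        obs.get("spo2_on_supplement", 100) < 90,       # saturation < 90% despite supplementation
        obs.get("acute_mental_status_change", False),  # acute change in mental status
        obs.get("urine_output_4h_cc", 100) < 50,       # urine output < 50 cc over 4 hours
        obs.get("staff_concern", False),               # significant staff concern (catch-all)
    ]
    return any(criteria)

print(meets_rrs_criteria({"heart_rate": 150}))                    # True
print(meets_rrs_criteria({"systolic_bp": 120, "resp_rate": 18}))  # False
```

Any single criterion suffices to summon the team; the explicit staff-concern field mirrors the box's catch-all criterion that any worried staff member may call.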

Initial studies of RRSs, performed primarily in Australia and the United Kingdom, showed promising reductions in unanticipated ICU admissions, cardiac arrests, and even overall inpatient mortality.1, 16, 17 The considerable enthusiasm generated by these studies18, 19 resulted in the Institute for Healthcare Improvement (IHI) incorporating RRSs into its 100,000 Lives campaign,2 and RRSs are now being implemented in the more than 3000 U.S. hospitals that joined the campaign. However, a recent commentary on rapid response teams20 and a systematic review of critical care outreach teams21 have raised concerns that this widespread implementation may not be justified by the available evidence. We performed a systematic review of studies of all variations of RRSs in order to determine their effect on patient outcomes and to characterize variations in their organization and implementation.

METHODS

Literature Search and Inclusion and Exclusion Criteria

We systematically searched MEDLINE, CINAHL, and BIOSIS through August 2006 for relevant studies using the various terms for RRSs (eg, medical emergency team, rapid response team, critical care outreach) and medical subject headings relevant to inpatient care and critical illness (eg, patient care team and resuscitation; the full search strategy is given in the Appendix). We also reviewed the abstract lists from the 2004 and 2005 American Thoracic Society and Society of Critical Care Medicine annual meetings and scanned reference lists from key articles.

We screened the abstracts of the articles identified by the search, and 2 independent reviewers abstracted potentially relevant articles using a standardized data abstraction form. Disagreements between the reviewers were resolved by consensus and, if necessary, discussion with a third reviewer. We included randomized controlled trials (RCTs), controlled before‐after studies, and interrupted time series, including simple before‐after studies with no contemporaneous control group, though we planned to separately analyze data from controlled studies if possible. We included only English‐language articles.

On the basis of RRS features in widely cited articles22–24 and the recommendations of a recent consensus statement,5 we defined an RRS as having the following characteristics: (1) its primary responsibility is to intervene whenever hospitalized patients become unstable before cardiopulmonary arrest occurs; (2) it must primarily provide care outside the ICU and emergency department; (3) specific clinical criteria must be in place that define instability and trigger a call to the team; and (4) it must be expected to respond within a specified time. We defined these criteria in order to distinguish studies of RRSs from studies of cardiac arrest (code blue) teams or traditional consulting services.

To be included in the analysis, articles had to report the effects of a rapid response system on at least 1 of these outcomes: inpatient mortality, inpatient cardiac arrest, or unscheduled ICU transfer. We used the definitions of cardiac arrest and unscheduled ICU transfer given in the primary studies. In addition to these outcomes, we abstracted information on the number of admissions and the number of RRS calls during the study period. To maximize the comparability of study outcomes, we calculated the rates of mortality, cardiac arrest, unscheduled ICU transfer, and RRS calls per 1000 admissions for studies that did not supply data in this fashion.
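The normalization just described is simple arithmetic. The sketch below, using hypothetical before-after counts (not data from any included study), illustrates the rate-per-1000-admissions calculation and a per-study risk ratio with a 95% confidence interval derived from the standard error of the log risk ratio.

```python
import math

def rate_per_1000(events: int, admissions: int) -> float:
    """Events per 1000 hospital admissions."""
    return 1000.0 * events / admissions

def risk_ratio(d_after, n_after, d_before, n_before):
    """Risk ratio (after vs. before RRS) with a 95% CI from the log-RR standard error."""
    rr = (d_after / n_after) / (d_before / n_before)
    se = math.sqrt(1 / d_after - 1 / n_after + 1 / d_before - 1 / n_before)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts for one before-after study (not from any reviewed trial):
deaths_before, admissions_before = 120, 10_000
deaths_after, admissions_after = 95, 11_000

print(rate_per_1000(deaths_before, admissions_before))  # 12.0 deaths per 1000 admissions
rr, lo, hi = risk_ratio(deaths_after, admissions_after, deaths_before, admissions_before)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these invented counts the study-level risk ratio works out to roughly 0.72; the same two functions apply to cardiac arrests and unscheduled ICU transfers.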

Assessment of Study Quality

Quality scoring instruments for studies included in systematic reviews generally focus on randomized controlled trials, which we anticipated would account for a minority of included studies. On the basis of recommendations for the assessment of methodology for nonrandomized study designs,25, 26 we identified and abstracted 4 important determinants of internal validity (Table 1). The consensus statement5 recommends monitoring the effectiveness of RRSs by measuring the rate of unscheduled ICU admissions (defined as an unplanned admission to the ICU from a general ward27) and cardiac arrests of patients who were not listed as do not resuscitate (DNR). As the definition of unscheduled ICU admission allows room for subjectivity, we considered the blinding of assessment of this outcome to study group assignment to be important, especially for retrospective studies. Measurement of cardiac arrests should be less susceptible to blinding issues, but one of the functions of an RRS can be to initiate discussions that result in changes in the goals of care and code status.22 Thus, excluding patients made DNR by the team from cardiac arrest calculations could falsely lower the cardiac arrest rate.

Study Quality Criteria
Quality measures
  • Elements affecting study internal validity and generalizability. These elements were chosen based on the methods of the Cochrane Collaboration.25 These criteria were not used to determine article inclusion or exclusion.

A. Internal validity
1. Did the study have a contemporaneous control group?
2. If there was no contemporaneous control group, did the study report data for more than 1 time point before and after the intervention?
3. Were nonobjective primary outcomes (eg, unplanned ICU transfer) measured in a blinded fashion?
4. Were patients made DNR by the RRS included in calculations of the cardiac arrest and mortality rates?
B. Generalizability
5. Was the intervention performed independent of other quality improvement interventions targeting the care of critically ill patients?
6. Did the study report the number of admissions and RRS calls during the study period?
7. Did the study report the availability of intensivists before and after the intervention?

We also abstracted 3 separate elements of study quality pertaining to the external validity or generalizability of included studies (Table 1). These elements were defined a priori by consensus reached by discussion among the reviewers. These elements were intended to provide a framework for interpreting the included studies and to guide subgroup analyses. They were not used to form a composite quality score.

Statistical Analysis

We performed a random‐effects meta‐analysis to calculate summary risk ratios with 95% confidence intervals for the effects of RRSs on inpatient mortality, cardiopulmonary arrest, and unscheduled ICU admission. Included in the meta‐analysis were the studies that reported the total number of admissions and the incidence of each outcome before and after institution of the RRS. For randomized trials that reported pre‐ and postintervention data, we treated the intervention and control groups as separate trials in order to be able to compare their effects with the before‐after trials. For studies that reported results adjusted for clustering (ie, by hospital), we back‐calculated the unadjusted results by multiplying the standard error by the square of the design factor coefficient.28, 29 We calculated the I² statistic to assess heterogeneity.30 All analyses were performed using Stata version 8.2 (Stata Corporation, College Station, TX).
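As a rough illustration of this pooling approach (a sketch of a standard DerSimonian-Laird random-effects model, not the authors' actual Stata code), the following pools hypothetical per-study risk ratios and standard errors and reports the I² heterogeneity statistic.

```python
import math

def random_effects_pool(rrs, ses):
    """DerSimonian-Laird random-effects pooling of log risk ratios.

    rrs: per-study risk ratios; ses: standard errors of the log risk ratios.
    Returns the summary RR, its 95% CI bounds, and the I^2 statistic (percent).
    """
    y = [math.log(rr) for rr in rrs]   # effect sizes on the log scale
    w = [1 / se**2 for se in ses]      # fixed-effect (inverse-variance) weights
    k = len(y)
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q and the DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights incorporate tau^2
    w_re = [1 / (se**2 + tau2) for se in ses]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return (math.exp(y_re),
            math.exp(y_re - 1.96 * se_re),
            math.exp(y_re + 1.96 * se_re),
            i2)

# Hypothetical per-study risk ratios and standard errors (not the reviewed data):
summary_rr, lo, hi, i2 = random_effects_pool([0.72, 0.95, 0.80], [0.14, 0.10, 0.20])
print(f"summary RR {summary_rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```

An I² near 0% indicates that between-study variation is consistent with chance; the 62.1% and 79.1% values in the Results section indicate substantial heterogeneity beyond chance.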

RESULTS

The database searches identified 861 citations, and 1 additional prepublication study was supplied by the study's first author31; 89 articles underwent full‐text review (Fig. 1). Most studies excluded during the full‐text review did not meet our criteria for a study of an RRS or were observational studies or review articles. For instance, Sebat et al.32 published a study of a shock team at a community hospital that intervened when any patient suffered nontraumatic shock; the study did not meet our inclusion criteria, as all patients were admitted to the ICU, most directly from the emergency department. Another frequently cited study, by Bristow et al.,16 was excluded as it was a case‐control study. Thirteen studies3, 22–24, 31, 33–40 (11 full‐length studies and 2 abstracts) met all criteria for inclusion.

Figure 1
Article identification and triage trial flow diagram, as recommended by the QUOROM statement for improving the methodological quality of systematic reviews and meta‐analyses.54 Reasons for exclusion were: R1, not a study of a rapid response system; R2, ineligible study design (not simple before‐after study, controlled before‐after study, interrupted time series, or randomized, controlled trial); R3, no eligible outcomes (did not report effect of RRS on in‐hospital cardiac arrest, unscheduled ICU admission, or inpatient mortality); R4, overlapping publication. Data from 1 article55 were pooled with an included article,34 and the other56 was excluded because it contained longer‐term follow‐up data from another included study.23

Characteristics of Included Trials

The characteristics of included studies are outlined in Table 2. Five studies were performed in Australia, 4 in the United States, and 4 in the United Kingdom. All were conducted in academic teaching hospitals. Two studies37, 38 focused on pediatric inpatients, and the remainder involved hospitalized adults. The RRS intervened for all hospitalized patients in all but 2 studies (1 of which focused on surgical inpatients33 and the other in which the RRS evaluated only patients discharged from the ICU3). In 2 studies,31, 39 the RRS was available to evaluate outpatients, such as hospital visitors, in addition to inpatients.

Studies Included in Meta‐analysis
Study | Location | Hospital type | Patient population | RRS composition | Measured outcomes | Definition of unscheduled ICU admission | Definition of cardiac arrest
  • RRS, rapid response system; ICU, intensive care unit; RN, registered nurse; ED, emergency department; DNR, do not resuscitate; NA, not applicable.

Buist et al., 200220 | Australia | Tertiary-care academic hospital | Adult inpatients | ICU registrar, medical registrar, ICU nurse | DNR status, cardiac arrest, unscheduled ICU admission, mortality | Not supplied | Any code blue team activation
Bellomo et al., 200321 | Australia | Tertiary-care academic hospital | Adult inpatients | ICU nurse and ICU fellow attended all calls; ICU attending (8 AM-8 PM) and medical registrar attended if requested | Mortality, cardiac arrest, DNR status | NA | Unresponsive, with no pulse or blood pressure, and basic life support initiated
Pittard et al., 200322 | United Kingdom | Tertiary-care academic hospital | Adult inpatients on surgical wards | Senior critical care nurses and medical staff | Unscheduled ICU admission | Any patient not admitted directly from operating theater after elective surgery | NA
Bellomo et al., 200431 | Australia | Tertiary-care academic hospital | Adult postoperative patients | ICU attending, ICU fellow, ICU registered nurse, medical fellow | Mortality, unscheduled ICU admission, ICU length of stay, DNR status | Postoperative admission to ICU because of a clinical complication | NA
DeVita et al., 200432 | United States | Tertiary-care academic hospital | Adult inpatients | ICU physician, anesthesiologist, 2 other physicians, 2 ICU nurses, floor nurse, respiratory therapist | Cardiac arrest | NA | Any code blue team activation
Garcea et al., 20043 | United Kingdom | Teaching hospital | Adult patients discharged from ICU | 2 ICU nurses, 1 ICU nurse specialist, ICU consultant physician (if needed) | Mortality, unscheduled ICU admission | Emergency readmissions to ICU | NA
Kenward et al., 200438 | United Kingdom | Teaching hospital | Adult inpatients | Not stated | Mortality, cardiac arrest | NA | Loss of spontaneous circulation
Priestley et al., 200433 | United Kingdom | Teaching hospital | Adult inpatients | ICU nurse consultant, ICU physician available if needed | Mortality, overall length of stay | NA | NA
Hillman et al., 200534 | Australia | 23 hospitals (17 academic, 6 nonacademic) | Adult inpatients | Varied between study hospitals; required to have at least 1 MD and 1 RN from ED or ICU | Mortality, cardiac arrest, unscheduled ICU admission, DNR status | Any unscheduled admission to ICU from general ward | No palpable pulse
Hunt et al., 2005 (abstract)36 | United States | Pediatric tertiary-care academic hospital | Pediatric inpatients | Not provided | Cardiac arrest | NA | Not provided
Meredith et al., 2005 (abstract)37 | United States | Tertiary-care academic hospital | Adult inpatients, outpatients, and visitors | ICU registered nurse, respiratory therapist | Mortality | NA | Not provided
Tibballs et al., 200535 | Australia | Pediatric tertiary-care academic hospital | Pediatric inpatients | ICU physician, medical registrar, ED physician, ICU nurse | Cardiac arrest, unscheduled ICU admission | Not supplied | Not provided
King et al., 200629 | United States | Tertiary-care academic hospital | Adult inpatients, outpatients, and visitors | Hospitalist, internal medicine resident, ICU nurse, ward nurse, pharmacist | Cardiac arrest | NA | Not provided

RRS Structure, Calling Criteria, and Responsibilities

Seven studies22, 23, 31, 33, 34, 36, 37 that described team composition used variants of the medical emergency team model, a physician-led team (Table 2). In 6 of these 7 studies, the team included a critical care physician (attending or fellow) and an ICU nurse; in the sole RCT (the MERIT study36), the team structure varied between hospitals, consisting of a nurse and a physician from either the emergency department or the ICU. Hospitalists, who are involved in RRS responses at many U.S. hospitals, were primary team leaders of the RRS in only 1 study.31 In 2 studies34, 37 the RRS also responded to code blue calls, and in 4 studies23, 31, 33, 39 the RRS and the code blue team had separate personnel; the remaining studies did not specify the distinction between the RRS and the code blue team.

Factors Affecting Internal Validity and Generalizability of Studies Included in Meta‐analysis
Study | Contemporaneous control group | Data reported at more than 1 time before/after intervention | RRS calling rate reported | Outcomes analysis included patients made DNR by team | Blind measurement of nonobjective outcomes | Intensivist always available | Other QI efforts during study
  • SBA, simple before-after (quasi-experimental) study; ITS, interrupted time series; RCT, randomized controlled trial; NA, not applicable; NR, not reported; APLS, advanced pediatric life support

Buist et al., 200220 | No | No | Yes | No (mortality) | No | NR | NR
Bellomo et al., 200321 | No | No | Yes | Yes (mortality) | NA | Yes (ICU fellow) | No
Pittard et al., 200322 | No | No | Yes | NA | No | NR | NR
Bellomo et al., 200431 | No | No | Yes | Yes (mortality) | No | Yes (ICU fellow) | No
DeVita et al., 200432 | No | Yes | Yes | NA | No | Yes (critical care attending physician) | NR
Garcea et al., 200433 | No | No | No | Unclear | No | NR | NR
Kenward et al., 200438 | No | No | Yes | Unclear | No | NR | NR
Priestley et al., 200433 | No (interrupted time series) | Yes | No | NA | No | NR | NR
Hillman et al., 200534 | Yes | No | Yes | Unclear | Yes | NR | No
Hunt et al., 2005 (abstract)36 | No | No | Yes | NA | NR | NR | NR
Meredith et al., 2005 (abstract)37 | No | No | Yes | No | NA | No | No
Tibballs et al., 200535 | No | No | Yes | Unclear | No | NR | Yes (educational workshops/more training in APLS)
King et al., 200629 | No | Yes | Yes | NA | No | Yes | No

In 4 studies the RRSs were led by nurses. One study published in abstract form39 used the rapid response team model, consisting of a critical care nurse and a respiratory therapist, with assistance as needed from the primary medical staff and a critical care physician. Three studies3, 24, 35 from UK hospitals used the critical care outreach (CCO) model, in which ICU‐trained nurses respond initially with assistance from intensivists. The CCO model also involves follow‐up on patients discharged from the ICU and proactive rounding on unstable ward patients.

The hospitals used broadly similar approaches to determining when to summon the RRS, relying on combinations of objective clinical criteria (eg, vital sign abnormalities) and subjective criteria (eg, acute mental status change, staff member concerned about patient's condition). Three studies3, 24, 35 used a formal clinical score (the Patient‐At‐Risk score or the Modified Early Warning score) to trigger calls to the RRS. Three studies, 2 of them from the same institution,23, 33 reported the frequency of specific triggers for RRS activation. Concern by bedside staff and respiratory distress were the most frequent activators of the RRS.

Study Internal Validity and Generalizability

One study,36 the MERIT trial, conducted in Australia, was a cluster-randomized RCT (randomized by hospital) that adhered to recommended elements of design and reporting for studies of this type.41 In this study, hospitals in the control group received only an educational intervention on caring for deteriorating patients; hospitals in the intervention group received the educational module and started an RRS. An additional study35 identified itself as a randomized trial, but randomization occurred at the hospital ward level, with introduction of the intervention (critical care outreach) staggered so that at different points an individual ward could have been in either the control or the intervention group; therefore, this study was considered an interrupted time series. All other included trials were before-after studies with no contemporaneous control group.

Most studies did not meet criteria for internal validity or generalizability (Table 2). Two studies3, 35 did not report the number of RRS calls during the study period. One study22 omitted patients whose resuscitation status was changed after RRS evaluation from the calculation of inpatient mortality; thus, the patients who had been made do not resuscitate by the RRS did not contribute to the calculated mortality rate. The disposition of these patients was unclear in another study.36 All studies measured clinical outcomes retrospectively, and no studies reported blinding of outcomes assessors for nonobjective outcomes (eg, unplanned ICU admission). Studies generally did not report on the availability of intensivists or if other quality improvement interventions targeting critically ill patients were implemented along with the RRS.

RRS Usage and Effects on Patient Outcomes

Seven studies22-24, 34, 36-38 reported enough information to calculate the RRS calling rate (4 studies24, 31, 39, 40 reported the total number of calls but not the number of admissions, and 2 studies3, 35 did not report either). In these 7 studies, the calling rate varied from 4.5 to 25.8 calls per 1000 admissions. Three studies documented the calling rate before and after the intervention: a study at a hospital with a preexisting RRS34 reported that the calling rate increased from 13.7 to 25.8 calls per 1000 admissions after an intensive education and publicity program; in a pediatric trial,38 the overall emergency calling rate (for cardiac arrests and medical emergencies) was reported to increase from 6.6 to 10.4 per 1000 admissions; and in the MERIT trial,36 calls increased from 3.1 to 8.7 per 1000 admissions.

Effects of RRS on Clinical Outcomes

Nine studies3, 22, 23, 33, 35-37, 39, 40 reported the effect of an RRS on inpatient mortality; 9 studies22, 23, 31, 33, 34, 36-38, 40 reported its effect on cardiopulmonary arrests; and 6 studies3, 22, 24, 33, 36, 37 reported its effect on unscheduled ICU admissions. Of these, 7 trials that reported mortality and cardiopulmonary arrests and 6 studies that reported unscheduled ICU admissions supplied sufficient data for meta-analysis.

Observational studies demonstrated improvement in inpatient mortality, with a summary risk ratio of 0.82 (95% CI: 0.74‐0.91, heterogeneity I2 62.1%; Fig. 2). However, the magnitude of these improvements was very similar to that seen in the control group of the MERIT trial (RR 0.73, 95% CI: 0.53‐1.02). The intervention group of the MERIT trial also demonstrated a reduction in mortality that was not significantly different from that of the control group (RR 0.65, 95% CI: 0.48‐0.87). We found a similar pattern in studies reporting RRS effects on cardiopulmonary arrests (Fig. 3). The observational studies did not show any effect on the risk of unscheduled ICU admissions (summary RR 1.08, 95% CI: 0.96‐1.22, heterogeneity I2 79.1%) nor did the MERIT trial (Fig. 4).

Figure 2
Effect of RRS on inpatient mortality. The forest plot compares the relative risk of mortality after implementation of RRS with that before RRS implementation. For the MERIT trial, we treated the 2 study arms (intervention and control) as separate before-after trials in order to compare with the observational studies. The study by Garcea et al.3 evaluated the effect of RRS on readmission to the ICU. The supplied outcomes are for in-hospital mortality of patients readmitted to the ICU only; thus, the baseline mortality rate is not reported. The study by Bellomo et al. (2004)33 evaluated the effect of RRS on postoperative patients only. The other study performed at the same institution and published in 200323 reported outcomes of all inpatients. Therefore, we subtracted the results of the 2004 study from those reported in the 2003 study to avoid counting the same outcomes twice (RR, relative risk; NR, not reported; NA, not applicable).
Figure 3
Effect of RRS on cardiopulmonary arrests. The forest plot shows the relative risk of cardiopulmonary arrest after implementation of RRS. As in Figure 2, the MERIT trial intervention and control groups were treated as separate before-after trials.
Figure 4
Effect of RRS on unscheduled ICU admissions. The forest plot shows the relative risk of an unscheduled ICU admission after implementation of RRS. As in Figures 2 and 3, the MERIT trial intervention and control groups were treated as separate before-after trials. The study by Garcea et al.3 evaluated the effect of RRS on readmissions to ICU. The supplied outcomes are for unscheduled readmissions to ICU; thus, the baseline unscheduled ICU admission rate is not reported.

DISCUSSION

Despite the strong face validity of the RRS concept, the current literature on medical emergency teams, rapid response teams, and critical care outreach suffers from substantial flaws that make it difficult to determine the effect of an RRS on patient outcomes. These flaws include suboptimal study designs, failure to report important cointerventions, problematic outcome definitions, and lack of verification of the validity of the measured outcomes. As a result, very little empiric data are available to define the effectiveness of RRSs or to provide guidance for hospitals planning to implement an RRS.

Though early studies reported that RRSs appeared to reduce mortality and cardiac arrest rates, the sole randomized trial of an RRS (the MERIT trial36) showed no differences between intervention and control hospitals for any clinical outcome. Both inpatient mortality and cardiac arrest rates declined in the intervention and control groups of the MERIT trial, and the reductions in these outcomes in observational trials were similar to those seen in the MERIT control group. This strongly implies that other factors besides the RRS were responsible for the results of previous before‐after studies. These studies, which have been widely cited by proponents of the RRS, suffer from methodological limitations intrinsic to the study design and issues with outcome measurement that may have introduced systematic bias; these factors likely explain the contrast between the generally positive results of the before‐after studies and the negative results of the MERIT trial.

Most early RRS trials used an uncontrolled before-after study design, as is common in quality improvement studies.42 This study design cannot account for secular trends or other factors, including other QI interventions, that could influence the effect of an intervention.26 The statistically significant reduction in inpatient mortality in the control arm of the MERIT trial is an instructive example; this decline could have been a result of the educational intervention on caring for deteriorating patients, other ongoing QI projects at the individual hospitals, or simply random variation during the relatively short (6-month) follow-up period. Such factors could also entirely account for the impressive results seen in the initial uncontrolled RRS studies. Nearly all the studies we reviewed also failed to discuss aspects of the hospital context that could influence outcomes for critically ill patients, such as the nurse-staffing ratio,43 ICU bed availability,44-46 overall hospital census,47 or availability of intensivists48 or hospitalists.49 Failure to control for (or at least report) important aspects of the environment in which the intervention was performed is akin to failing to report baseline patient comorbidities or concurrent therapies in a study of a drug's effectiveness.

Our review also suggests how bias in the measurement of clinical outcomes may have contributed to the apparent effect of RRSs. In 1 before-after study, patients for whom RRS activation resulted in a change in code status to do not resuscitate (DNR) were excluded from calculations of mortality,22, 50 resulting in underreporting of mortality after RRS implementation. Disposition of such patients was unclear in 3 other studies.3, 36, 40 Some studies22, 34 defined cardiopulmonary arrest as any activation of the code blue team, regardless of whether the patient was actually in cardiac arrest. This almost inevitably would result in fewer arrests after implementation of the RRS, as the indications for calling the code blue team would be narrower. Finally, nearly all studies used trends in nonobjective primary outcomes (eg, unplanned ICU transfer) to support RRS effects but did not validate any of these outcomes (eg, how often did reviewers agree an ICU transfer was preventable), and none of the assessors of these outcomes were blinded.

Some have attributed the MERIT trial's failure to demonstrate a benefit of RRSs to inadequate implementation, as the trial's RRS calling rate of 8.7 calls per 1000 admissions was less than the 15 calls per 1000 admissions cited as optimal in a mature RRS.51 However, published studies generally reported a calling rate of 4-5 calls per 1000 admissions,22, 23, 37 with only 1 trial reporting a higher calling rate.34

A recent commentary20 and a systematic review of critical care outreach teams21 both addressed the effectiveness of RRSs. We sought to examine the effects of all RRS subtypes, using quantitative analysis and assessment of methodological quality to determine the overall effect of RRSs. The results of our analysis (which included data from several newer studies31, 38, 39) support and extend the conclusion of prior reviews: RRSs, although a potentially promising intervention, do not unequivocally benefit patients and may not warrant more widespread implementation until more evidence becomes available. Our analysis also demonstrates that many studies widely cited as supporting wide implementation of RRSs are flawed and probably not generalizable.

Despite these caveats, RRSs remain an intuitively attractive concept and may be of benefit at some hospitals. Further studies in this area should focus on identifying which patient populations are at high risk for clinical decompensation, identifying the role of clinical structures of care (eg, nurse‐staffing ratio, presence of hospitalists) in preventing adverse outcomes and determining which specific RRS model is most effective. As well, more information is needed about educating bedside staff and RRS team members, as this is likely critical to success of the team. Unfortunately, only the article by King et al.31 provided sufficient detail about the implementation process to assist hospitals in planning an RRS. The remaining articles had only scant details about the intervention and its implementation, a common problem noted in the quality improvement literature.42, 52, 53

Our analysis had several limitations. We attempted to identify as many RRS trials as possible by searching multiple databases and reviewing abstract proceedings, but as the RRS literature is in its infancy, we may not have located other unpublished studies or gray literature. There is no validated system for evaluating the methodological strength of nonrandomized studies; therefore, we assessed study quality on the basis of prespecified criteria for internal and external validity. Finally, we found significant statistical heterogeneity in our quantitative analyses, indicating that the variability between individual studies in treatment effects was greater than that expected by chance. As the primary reason we conducted a meta-analysis was to compare the results of before-after trials with those of the randomized MERIT trial, we did not further explore the reasons for this heterogeneity, although variation in patient populations and RRS structure likely accounts for a significant proportion of it.

Although there is a theoretical basis for implementing a rapid response system, the published literature shows inconsistent benefits to patients and suffers from serious methodological flaws. Future studies of RRSs should attempt to define which patient populations are at risk, the essential characteristics of RRSs, effective implementation strategies, and, most important, whether any RRS improves clinical outcomes. Until such evidence is available, hospitals should not be mandated to establish an RRS and should consider prioritizing quality improvement resources for interventions with a stronger evidence base.

Acknowledgements

The authors thank Emmanuel King, MD, for graciously providing a copy of his manuscript prior to publication and Alexis Meredith, MD, for providing additional information regarding his study. Dr. Shojania holds a Government of Canada Research Chair in Patient Safety and Quality Improvement.

APPENDIX


Literature Search Strategy (Performed through August 2006)
Search terms Citations
1 {(rapid [ti] AND (response [ti] OR resuscitation [ti]) OR (patient at risk [ti])} AND (program [ti] OR team* [ti] OR service* [ti]) 23
2 medical emergency team* [ti] OR medical crisis team* [ti] OR {(critical [ti] OR intensive [ti]) AND care [ti] AND outreach [ti]} 87
3 hospital [ti] AND resuscitation [ti] AND team* [ti] 11
4 medical emergency team* [ab] OR rapid response team [ab] OR medical crisis team* [ab] 89
5 #1 OR #2 OR #3 OR #4 158
6 Resuscitation [mh] OR heart arrest [mh] OR hospital mortality [mh] 72,488
7 (patient care team [mh] OR critical care [mh] OR intensive care units [mh]) AND (patient readmission [mh] OR organization and administration [mh]) 20,321
8 #6 AND #7 1,419
9 {(randomised[ti] OR randomized[ti] OR controlled[ti] OR intervention[ti] OR evaluation[ti] OR comparative[ti] OR effectiveness[ti] OR evaluation[ti] OR feasibility[ti]) AND (trial[ti] OR studies[ti] OR study[ti] OR program[ti] OR design[ti])} OR clinical trial[pt] OR randomized controlled trial[pt] OR epidemiologic studies[mh] OR evaluation studies[mh] OR comparative study[mh] OR feasibility studies[mh] OR intervention studies[mh] OR program evaluation[mh] OR epidemiologic research design[mh] OR systematic5 2,688,847
10 #8 AND #9 748
11 #5 OR #10 806

A medical emergency team1 is a group of clinicians trained to quickly assess and treat hospitalized patients showing acute signs of clinical deterioration. Equivalent terms used are rapid response team,2 critical care outreach team,3 and patient‐at‐risk team.4 A consensus panel5 recently endorsed use of the term rapid response system (RRS) to denote any system that uses a standard set of clinical criteria to summon caregivers to the bedside of a patient who is deemed unstable but not in cardiopulmonary arrest (in which case a standard resuscitation team would be summoned). Such teams primarily evaluate patients on general hospital wards.

RRSs have been developed in response to data indicating that patients frequently demonstrate premonitory signs or receive inadequate care prior to unanticipated intensive care unit (ICU) admission, cardiopulmonary arrest, or death outside the ICU.6-14 Earlier identification and treatment of such patients could prevent adverse clinical outcomes. The structure of RRSs varies but generally includes a physician and nurse and may also include other staff such as respiratory therapists.5 Teams are summoned by hospital staff to assess patients meeting specific clinical criteria (see box) about whom the bedside staff has significant concern.15

Example of Rapid Response System Calling Criteria for Adult Patients
Any staff member may call the team if 1 of the following criteria is met:
Heart rate > 140/min or < 40/min
Respiratory rate > 28/min or < 8/min
Systolic blood pressure > 180 mmHg or < 90 mm Hg
Oxygen saturation < 90% despite supplementation
Acute change in mental status
Urine output < 50 cc over 4 hours
Staff member has significant concern about patient's condition
Additional criteria used at some institutions:
Chest pain unrelieved by nitroglycerin
Threatened airway
Seizure
Uncontrolled pain
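Because the example criteria above are simple threshold checks, they can be expressed directly in code. The following Python sketch encodes only this example box; the field names, default values, and function are hypothetical illustrations, not a clinical decision tool or any hospital's actual policy.

```python
def meets_calling_criteria(obs: dict) -> bool:
    """Return True if any example adult RRS calling criterion is met.

    `obs` is a hypothetical dictionary of bedside observations; absent
    keys default to normal values so that only reported abnormalities
    can trigger a call. Thresholds mirror the example box above.
    """
    hr = obs.get("heart_rate", 80)
    rr = obs.get("resp_rate", 16)
    sbp = obs.get("systolic_bp", 120)
    checks = [
        hr > 140 or hr < 40,                       # heart rate
        rr > 28 or rr < 8,                         # respiratory rate
        sbp > 180 or sbp < 90,                     # systolic blood pressure
        obs.get("spo2_on_o2", 98) < 90,            # saturation despite O2
        obs.get("acute_mental_status_change", False),
        obs.get("urine_output_4h_cc", 100) < 50,   # urine output over 4 h
        obs.get("staff_concern", False),           # subjective criterion
    ]
    return any(checks)

print(meets_calling_criteria({"heart_rate": 150}))   # → True
print(meets_calling_criteria({"systolic_bp": 120}))  # → False
```

Note that the subjective criterion (staff concern) is modeled as a boolean flag, reflecting the point made in the Methods that calling criteria combine objective vital-sign thresholds with subjective judgment.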

Initial studies of RRSs, performed primarily in Australia and the United Kingdom, showed promising reductions in unanticipated ICU admissions, cardiac arrests, and even overall inpatient mortality.1, 16, 17 The considerable enthusiasm generated by these studies18, 19 resulted in the Institute for Healthcare Improvement (IHI) incorporating RRSs into its 100,000 Lives campaign,2 and RRSs are now being implemented in the more than 3000 U.S. hospitals that joined the campaign. However, a recent commentary on rapid response teams20 and a systematic review of critical care outreach teams21 have raised concerns that this widespread implementation may not be justified by the available evidence. We performed a systematic review of studies of all variations of RRSs in order to determine their effect on patient outcomes and to characterize variations in their organization and implementation.

METHODS

Literature Search and Inclusion and Exclusion Criteria

We systematically searched MEDLINE, CINAHL, and BIOSIS through August 2006 for relevant studies using the various terms for RRSs (eg, medical emergency team, rapid response team, critical care outreach) and medical subject headings relevant to inpatient care and critical illness (eg, patient care team and resuscitation; the full search strategy is given in the Appendix). We also reviewed the abstract lists from the 2004 and 2005 American Thoracic Society and Society of Critical Care Medicine annual meetings and scanned reference lists from key articles.

We screened the abstracts of the articles identified by the search, and 2 independent reviewers abstracted potentially relevant articles using a standardized data abstraction form. Disagreements between the reviewers were resolved by consensus and, if necessary, discussion with a third reviewer. We included randomized controlled trials (RCTs), controlled before‐after studies, and interrupted time series, including simple before‐after studies with no contemporaneous control group, though we planned to separately analyze data from controlled studies if possible. We included only English‐language articles.

On the basis of RRS features in widely cited articles2224 and the recommendations of a recent consensus statement,5 we defined an RRS as having the following characteristics: (1) its primary responsibility is to intervene whenever hospitalized patients become unstable before cardiopulmonary arrest occurs; (2) it must primarily provide care outside the ICU and emergency department; (3) specific clinical criteria must be in place that define instability and trigger a call to the team; and (4) it must be expected to respond within a specified time. We defined these criteria in order to distinguish studies of RRSs from studies of cardiac arrest (code blue) teams or traditional consulting services.

To be included in the analysis, articles had to report the effects of a rapid response system on at least 1 of these outcomes: inpatient mortality, inpatient cardiac arrest, or unscheduled ICU transfer. We used the definitions of cardiac arrest and unscheduled ICU transfer given in the primary studies. In addition to these outcomes, we abstracted information on the number of admissions and the number of RRS calls during the study period. To maximize the comparability of study outcomes, we calculated the rates of mortality, cardiac arrest, unscheduled ICU transfer, and RRS calls per 1000 admissions for studies that did not supply data in this fashion.
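The normalization to events per 1000 admissions described above is straightforward arithmetic; a minimal sketch (the counts shown are illustrative placeholders, not data from any included study):

```python
def per_1000_admissions(events: int, admissions: int) -> float:
    """Normalize a raw event count to a rate per 1000 admissions."""
    return 1000 * events / admissions

# e.g., a hypothetical 63 cardiac arrests over 21,090 admissions
rate = per_1000_admissions(63, 21090)
print(f"{rate:.1f} arrests per 1000 admissions")  # → 3.0
```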

Assessment of Study Quality

Quality scoring instruments for studies included in systematic reviews generally focus on randomized controlled trials, which we anticipated would account for a minority of included studies. On the basis of recommendations for the assessment of methodology for nonrandomized study designs,25, 26 we identified and abstracted 4 important determinants of internal validity (Table 1). The consensus statement5 recommends monitoring the effectiveness of RRSs by measuring the rate of unscheduled ICU admissions (defined as an unplanned admission to the ICU from a general ward27) and cardiac arrests of patients who were not listed as do not resuscitate (DNR). As the definition of unscheduled ICU admission allows room for subjectivity, we considered the blinding of assessment of this outcome to study group assignment to be important, especially for retrospective studies. Measurement of cardiac arrests should be less susceptible to blinding issues, but one of the functions of an RRS can be to initiate discussions that result in changes in the goals of care and code status.22 Thus, excluding patients made DNR by the team from cardiac arrest calculations could falsely lower the cardiac arrest rate.

Study Quality Criteria
Quality measures
  • Elements affecting study internal validity and generalizability. These elements were chosen based on the methods of the Cochrane collaboration.25 These criteria were not used to determine article inclusion or exclusion.

A. Internal validity
1. Did the study have a contemporaneous control group?
2. If there was no contemporaneous control group, did the study report data for more than 1 time point before and after the intervention?
3. Were nonobjective primary outcomes (eg, unplanned ICU transfer) measured in a blinded fashion?
4. Were patients made DNR by the RRS included in calculations of the cardiac arrest and mortality rates?
B. Generalizability
5. Was the intervention performed independent of other quality improvement interventions targeting the care of critically ill patients?
6. Did the study report the number of admissions and RRS calls during the study period?
7. Did the study report the availability of intensivists before and after the intervention?

We also abstracted 3 separate elements of study quality pertaining to the external validity or generalizability of included studies (Table 1). These elements were defined a priori by consensus reached by discussion among the reviewers. These elements were intended to provide a framework for interpreting the included studies and to guide subgroup analyses. They were not used to form a composite quality score.

Statistical Analysis

We performed a random‐effects meta‐analysis to calculate summary risk ratios with 95% confidence intervals for the effects of RRSs on inpatient mortality, cardiopulmonary arrest, and unscheduled ICU admission. Included in the meta‐analysis were the studies that reported total number of admissions and incidence of each outcome before and after institution of the RRS. For randomized trials that reported pre‐ and postintervention data, we treated the intervention and control groups as separate trials in order to be able to compare their effects with the before‐after trials. For studies that reported results adjusted for clustering (ie, by hospital), we back‐calculated the unadjusted results by multiplying the standard error by the square of the design factor coefficient.28, 29 We calculated the I2 statistic to assess heterogeneity.30 All analyses were performed using Stata version 8.2 (Stata Corporation, College Station, TX).
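As a sketch of the pooling step, the following Python implements a standard DerSimonian-Laird random-effects meta-analysis of log risk ratios, together with the I2 heterogeneity statistic. The per-study counts are illustrative placeholders rather than the actual study data, and the review itself performed these analyses in Stata, not with this code.

```python
import math

# Hypothetical (events, admissions) counts before and after RRS
# implementation for three studies; illustrative numbers only.
studies = [
    {"pre": (73, 19317), "post": (56, 22847)},
    {"pre": (63, 21090), "post": (22, 20921)},
    {"pre": (148, 16246), "post": (379, 65367)},
]

def log_rr(pre, post):
    """Log risk ratio (post vs. pre) and its approximate variance."""
    c, n0 = pre
    a, n1 = post
    log_ratio = math.log((a / n1) / (c / n0))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n0
    return log_ratio, var

def dersimonian_laird(effects):
    """Random-effects summary of (log effect, variance) pairs."""
    y = [e for e, _ in effects]
    w = [1 / v for _, v in effects]                 # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
    wstar = [1 / (v + tau2) for _, v in effects]    # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
    se = math.sqrt(1 / sum(wstar))
    i2 = (max(0.0, (q - (k - 1)) / q) * 100) if q > 0 else 0.0
    return mu, se, i2

effects = [log_rr(s["pre"], s["post"]) for s in studies]
mu, se, i2 = dersimonian_laird(effects)
lo, hi = math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)
print(f"Summary RR {math.exp(mu):.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.1f}%")
```

I2 expresses the proportion of total variability attributable to between-study heterogeneity rather than chance, which is the statistic reported alongside the summary risk ratios in the Results.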

RESULTS

The database searches identified 861 citations, and 1 additional prepublication study was supplied by the study's first author31; 89 articles underwent full‐text review (Fig. 1). Most studies excluded during the full‐text review did not meet our criteria for a study of an RRS or were observational studies or review articles. For instance, Sebat et al.32 published a study of a shock team at a community hospital that intervened when any patient suffered nontraumatic shock; the study did not meet our inclusion criteria as all patients were admitted to the ICU, most directly from the emergency department. Another frequently cited study, by Bristow et al.,16 was excluded as it was a case‐control study. Thirteen studies,3, 2224, 31, 334011 full‐length studies and 2 abstractsmet all criteria for inclusion.

Figure 1
Article identification and triage trial flow diagram, as recommended by the QUOROM statement for improving the methodological quality of systematic reviews and meta‐analyses.54 Reasons for exclusion were: R1, not a study of a rapid response system; R2, ineligible study design (not simple before‐after study, controlled before‐after study, interrupted time series, or randomized, controlled trial); R3, no eligible outcomes (did not report effect of RRS on in‐hospital cardiac arrest, unscheduled ICU admission, or inpatient mortality); R4, overlapping publication. Data from 1 article55 were pooled with an included article,34 and the other56 was excluded because it contained longer‐term follow‐up data from another included study.23

Characteristics of Included Trials

The characteristics of included studies are outlined in Table 2. Five studies were performed in Australia, 4 in the United States, and 4 in the United Kingdom. All were conducted in academic teaching hospitals. Two studies37, 38 focused on pediatric inpatients, and the remainder involved hospitalized adults. The RRS intervened for all hospitalized patients in all but 2 studies: 1 focused on surgical inpatients,33 and in the other the RRS evaluated only patients discharged from the ICU.3 In 2 studies,31, 39 the RRS was available to evaluate outpatients, such as hospital visitors, in addition to inpatients.

Studies Included in Meta‐analysis
Study | Location | Hospital type | Patient population | RRS composition | Measured outcomes | Definition of unscheduled ICU admission | Definition of cardiac arrest
  • RRS, rapid response system; ICU, intensive care unit; RN, registered nurse; ED, emergency department; DNR, do not resuscitate; NA, not applicable.

Buist et al., 2002 | Australia | Tertiary‐care academic hospital | Adult inpatients | ICU registrar; medical registrar; ICU nurse | DNR status; cardiac arrest; unscheduled ICU admission; mortality | Not supplied | Any code blue team activation
Bellomo et al., 2003 | Australia | Tertiary‐care academic hospital | Adult inpatients | ICU nurse and ICU fellow attended all calls; ICU attending (8 AM‐8 PM) and medical registrar attended if requested | Mortality; cardiac arrest; DNR status | NA | Unresponsive, with no pulse or blood pressure, and basic life support initiated
Pittard et al., 2003 | United Kingdom | Tertiary‐care academic hospital | Adult inpatients on surgical wards | Senior critical care nurses and medical staff | Unscheduled ICU admission | Any patient not admitted directly from operating theater after elective surgery | NA
Bellomo et al., 2004 | Australia | Tertiary‐care academic hospital | Adult postoperative patients | ICU attending; ICU fellow; ICU registered nurse; medical fellow | Mortality; unscheduled ICU admission; ICU length of stay; DNR status | Postoperative admission to ICU because of a clinical complication | NA
DeVita et al., 2004 | United States | Tertiary‐care academic hospital | Adult inpatients | ICU physician; anesthesiologist; 2 other physicians; 2 ICU nurses; floor nurse; respiratory therapist | Cardiac arrest | NA | Any code blue team activation
Garcea et al., 2004 | United Kingdom | Teaching hospital | Adult patients discharged from ICU | 2 ICU nurses; 1 ICU nurse specialist; ICU consultant physician (if needed) | Mortality; unscheduled ICU admission | Emergency readmissions to ICU | NA
Kenward et al., 2004 | United Kingdom | Teaching hospital | Adult inpatients | Not stated | Mortality; cardiac arrest | NA | Loss of spontaneous circulation
Priestley et al., 2004 | United Kingdom | Teaching hospital | Adult inpatients | ICU nurse consultant; ICU physician available if needed | Mortality; overall length of stay | NA | NA
Hillman et al., 2005 | Australia | 23 hospitals (17 academic, 6 nonacademic) | Adult inpatients | Varied between study hospitals; required to have at least 1 MD and 1 RN from ED or ICU | Mortality; cardiac arrest; unscheduled ICU admission; DNR status | Any unscheduled admission to ICU from general ward | No palpable pulse
Hunt et al., 2005 (abstract) | United States | Pediatric tertiary‐care academic hospital | Pediatric inpatients | Not provided | Cardiac arrest | NA | Not provided
Meredith et al., 2005 (abstract) | United States | Tertiary‐care academic hospital | Adult inpatients, outpatients, and visitors | ICU registered nurse; respiratory therapist | Mortality | NA | Not provided
Tibballs et al., 2005 | Australia | Pediatric tertiary‐care academic hospital | Pediatric inpatients | ICU physician; medical registrar; ED physician; ICU nurse | Cardiac arrest; unscheduled ICU admission | Not supplied | Not provided
King et al., 2006 | United States | Tertiary‐care academic hospital | Adult inpatients, outpatients, and visitors | Hospitalist; internal medicine resident; ICU nurse; ward nurse; pharmacist | Cardiac arrest | NA | Not provided

RRS Structure, Calling Criteria, and Responsibilities

Seven studies22, 23, 31, 33, 34, 36, 37 described the team composition and used variants of the medical emergency team model, a physician‐led team (Table 2). In 6 of these 7 studies, the team included a critical care physician (attending or fellow) and an ICU nurse; in the sole RCT (the MERIT study36), the team structure varied between hospitals, consisting of a nurse and physician from either the emergency department or ICU. Hospitalists, who are involved in RRS responses at many U.S. hospitals, were the primary team leaders of the RRS in only 1 study.31 In 2 studies34, 37 the RRS also responded to code blue calls, and in 4 studies23, 31, 33, 39 the RRS and the code blue team had separate personnel; the remaining studies did not describe the distinction between the RRS and the code blue team.

Factors Affecting Internal Validity and Generalizability of Studies Included in Meta‐analysis
Study | Contemporaneous control group | Data reported at more than 1 time before/after intervention | RRS calling rate reported | Outcomes analysis included patients made DNR by team | Blind measurement of nonobjective outcomes | Intensivist always available | Other QI efforts during study
  • SBA, simple before‐after (quasi‐experimental) study; ITS, interrupted time series; RCT, randomized controlled trial; NA, not applicable; NR, not reported; APLS, advanced pediatric life support.

Buist et al., 2002 | No | No | Yes | No (mortality) | No | NR | NR
Bellomo et al., 2003 | No | No | Yes | Yes (mortality) | NA | Yes (ICU fellow) | No
Pittard et al., 2003 | No | No | Yes | NA | No | NR | NR
Bellomo et al., 2004 | No | No | Yes | Yes (mortality) | No | Yes (ICU fellow) | No
DeVita et al., 2004 | No | Yes | Yes | NA | No | Yes (critical care attending physician) | NR
Garcea et al., 2004 | No | No | No | Unclear | No | NR | NR
Kenward et al., 2004 | No | No | Yes | Unclear | No | NR | NR
Priestley et al., 2004 | No (interrupted time series) | Yes | No | NA | No | NR | NR
Hillman et al., 2005 | Yes | No | Yes | Unclear | Yes | NR | No
Hunt et al., 2005 (abstract) | No | No | Yes | NA | NR | NR | NR
Meredith et al., 2005 (abstract) | No | No | Yes | No | NA | No | No
Tibballs et al., 2005 | No | No | Yes | Unclear | No | NR | Yes (educational workshops/more training in APLS)
King et al., 2006 | No | Yes | Yes | NA | No | Yes | No

In 4 studies the RRSs were led by nurses. One study published in abstract form39 used the rapid response team model, consisting of a critical care nurse and a respiratory therapist, with assistance as needed from the primary medical staff and a critical care physician. Three studies3, 24, 35 from UK hospitals used the critical care outreach (CCO) model, in which ICU‐trained nurses respond initially with assistance from intensivists. The CCO model also involves follow‐up on patients discharged from the ICU and proactive rounding on unstable ward patients.

The hospitals used broadly similar approaches to determining when to summon the RRS, relying on combinations of objective clinical criteria (eg, vital sign abnormalities) and subjective criteria (eg, acute mental status change, staff member concerned about patient's condition). Three studies3, 24, 35 used a formal clinical score (the Patient‐At‐Risk score or the Modified Early Warning score) to trigger calls to the RRS. Three studies, 2 of them from the same institution,23, 33 reported the frequency of specific triggers for RRS activation. Concern by bedside staff and respiratory distress were the most frequent activators of the RRS.
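The combination of objective and subjective calling criteria described above can be sketched as follows. The vital-sign thresholds below are invented for illustration; they are not taken from the Patient-At-Risk score, the Modified Early Warning score, or any included study.

```python
# Hypothetical RRS calling criteria, loosely modeled on the objective
# vital-sign triggers described in the included studies. The cutoffs
# below are illustrative only, not those of any published score.
CRITERIA = {
    "heart_rate": lambda v: v < 40 or v > 130,
    "systolic_bp": lambda v: v < 90,
    "respiratory_rate": lambda v: v < 8 or v > 30,
    "spo2": lambda v: v < 90,
}

def rrs_triggers(vitals, staff_concerned=False):
    """Return the list of criteria met; any hit would summon the RRS.

    The subjective 'staff member concerned about patient's condition'
    trigger is modeled as a simple flag.
    """
    hits = [name for name, breached in CRITERIA.items()
            if name in vitals and breached(vitals[name])]
    if staff_concerned:
        hits.append("staff_concern")
    return hits

# A tachycardic patient breaches only the heart-rate criterion
calls = rrs_triggers({"heart_rate": 135, "systolic_bp": 110, "spo2": 96})
```

Note that subjective concern alone suffices to trigger a call, which matches the finding that concern by bedside staff was among the most frequent activators of the RRS.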

Study Internal Validity and Generalizability

One study,36 the MERIT trial, conducted in Australia, was a cluster‐randomized RCT (randomized by hospital) that adhered to recommended elements of design and reporting for studies of this type.41 In this study, hospitals in the control group received an educational intervention on caring for deteriorating patients only; hospitals in the intervention group received the educational module and started an RRS. An additional study35 identified itself as a randomized trial, but randomization occurred at the hospital ward level, with introduction of the intervention (critical care outreach) staggered so that at different points an individual ward could have been in either the control or intervention group; therefore, this study was considered an interrupted time series. All other trials included were before‐after studies with no contemporaneous control group.

Most studies did not meet criteria for internal validity or generalizability (Table 2). Two studies3, 35 did not report the number of RRS calls during the study period. One study22 omitted patients whose resuscitation status was changed after RRS evaluation from the calculation of inpatient mortality; thus, the patients who had been made do not resuscitate by the RRS did not contribute to the calculated mortality rate. The disposition of these patients was unclear in another study.36 All studies measured clinical outcomes retrospectively, and no studies reported blinding of outcomes assessors for nonobjective outcomes (eg, unplanned ICU admission). Studies generally did not report on the availability of intensivists or if other quality improvement interventions targeting critically ill patients were implemented along with the RRS.

RRS Usage and Effects on Patient Outcomes

Seven studies22-24, 34, 36-38 reported enough information to calculate the RRS calling rate (4 studies24, 31, 39, 40 reported the total number of calls but not the number of admissions, and 2 studies3, 35 did not report either). In these 7 studies, the calling rate varied from 4.5 to 25.8 calls per 1000 admissions. Three studies documented the calling rate before and after the intervention: a study at a hospital with a preexisting RRS34 reported that the calling rate increased from 13.7 to 25.8 calls per 1000 admissions after an intensive education and publicity program; in a pediatric trial,38 the overall emergency calling rate (for cardiac arrests and medical emergencies) was reported to increase from 6.6 to 10.4 per 1000 admissions; and in the MERIT trial,36 calls increased from 3.1 to 8.7 per 1000 admissions.

Effects of RRS on Clinical Outcomes

Nine studies3, 22, 23, 33, 35-37, 39, 40 reported the effect of an RRS on inpatient mortality, 9 studies22, 23, 31, 33, 34, 36-38, 40 reported its effect on cardiopulmonary arrests, and 6 studies3, 22, 24, 33, 36, 37 reported its effect on unscheduled ICU admissions. Of these, 7 trials that reported mortality and cardiopulmonary arrests and 6 studies that reported unscheduled ICU admissions supplied sufficient data for meta‐analysis.

Observational studies demonstrated improvement in inpatient mortality, with a summary risk ratio of 0.82 (95% CI: 0.74‐0.91, heterogeneity I2 62.1%; Fig. 2). However, the magnitude of these improvements was very similar to that seen in the control group of the MERIT trial (RR 0.73, 95% CI: 0.53‐1.02). The intervention group of the MERIT trial also demonstrated a reduction in mortality that was not significantly different from that of the control group (RR 0.65, 95% CI: 0.48‐0.87). We found a similar pattern in studies reporting RRS effects on cardiopulmonary arrests (Fig. 3). The observational studies did not show any effect on the risk of unscheduled ICU admissions (summary RR 1.08, 95% CI: 0.96‐1.22, heterogeneity I2 79.1%) nor did the MERIT trial (Fig. 4).

Figure 2
Effect of RRS on inpatient mortality. The forest plot compares the relative risk of mortality after implementation of RRS with that before RRS implementation. For the MERIT trial, we treated the 2 study arms (intervention and control) as separate before‐after trials in order to compare with the observational studies. The study by Garcea et al.3 evaluated the effect of RRS on readmission to the ICU. The supplied outcomes are for in‐hospital mortality of patients readmitted to the ICU only; thus, the baseline mortality rate is not reported. The study by Bellomo et al. (2004)33 evaluated the effect of RRS on postoperative patients only. The other study performed at the same institution and published in 200323 reported outcomes of all inpatients. Therefore, we subtracted the results of the 2004 study from those reported in the 2003 study to avoid counting the same outcomes twice (RR, relative risk; NR, not reported; NA, not applicable).
Figure 3
Effect of RRS on cardiopulmonary arrests. The forest plot shows the relative risk of cardiopulmonary arrest after implementation of RRS. As in Figure 2, the MERIT trial intervention and control groups were treated as separate before‐after trials.
Figure 4
Effect of RRS on unscheduled ICU admissions. The forest plot shows the relative risk of an unscheduled ICU admission after implementation of RRS. As in Figures 2 and 3, the MERIT trial intervention and control groups were treated as separate before‐after trials. The study by Garcea et al.3 evaluated the effect of RRS on readmissions to ICU. The supplied outcomes are for unscheduled readmissions to ICU; thus, the baseline unscheduled ICU admission rate is not reported.

DISCUSSION

Despite the strong face validity of the RRS concept, the current literature on medical emergency teams, rapid response teams, and critical care outreach suffers from substantial flaws that make it difficult to determine the effect of an RRS on patient outcomes. These flaws include suboptimal study designs, failure to report important cointerventions, problematic definitions of outcomes, and lack of verification of the validity of the measured outcomes. As a result, very little empiric data are available to define the effectiveness of RRSs or to provide guidance for hospitals planning to implement an RRS.

Though early studies reported that RRSs appeared to reduce mortality and cardiac arrest rates, the sole randomized trial of an RRS (the MERIT trial36) showed no differences between intervention and control hospitals for any clinical outcome. Both inpatient mortality and cardiac arrest rates declined in the intervention and control groups of the MERIT trial, and the reductions in these outcomes in observational trials were similar to those seen in the MERIT control group. This strongly implies that other factors besides the RRS were responsible for the results of previous before‐after studies. These studies, which have been widely cited by proponents of the RRS, suffer from methodological limitations intrinsic to the study design and issues with outcome measurement that may have introduced systematic bias; these factors likely explain the contrast between the generally positive results of the before‐after studies and the negative results of the MERIT trial.

Most early RRS trials used an uncontrolled before‐after study design, as is common in quality improvement studies.42 This study design cannot account for secular trends or other factors, including other QI interventions, that could influence the effect of an intervention.26 The statistically significant reduction in inpatient mortality in the control arm of the MERIT trial is an instructive example; this decline could have been a result of the educational intervention on caring for deteriorating patients, other ongoing QI projects at the individual hospitals, or simply random variation during the relatively short (6‐month) follow‐up period. Such factors could also entirely account for the impressive results seen in the initial uncontrolled RRS studies. Nearly all the studies we reviewed also failed to discuss aspects of the hospital context that could influence outcomes for critically ill patients, such as the nurse‐staffing ratio,43 ICU bed availability,44-46 overall hospital census,47 or availability of intensivists48 or hospitalists.49 Failure to control for, or at least report, important aspects of the environment in which the intervention was performed is akin to failing to report baseline patient comorbidities or concurrent therapies in a study of a drug's effectiveness.

Our review also suggests how bias in the measurement of clinical outcomes may have contributed to the apparent effect of RRSs. In 1 before‐after study, patients for whom RRS activation resulted in a change in code status to do not resuscitate (DNR) were excluded from calculations of mortality,22, 50 resulting in underreporting of mortality after RRS implementation. The disposition of such patients was unclear in 3 other studies.3, 36, 40 Some studies22, 34 defined cardiopulmonary arrest as any activation of the code blue team, regardless of whether the patient was actually in cardiac arrest. This almost inevitably would result in fewer arrests after implementation of the RRS, as the indications for calling the code blue team would be narrower. Finally, nearly all studies used trends in nonobjective primary outcomes (eg, unplanned ICU transfer) to support RRS effects but did not validate any of these outcomes (eg, how often reviewers agreed that an ICU transfer was preventable), and none of the assessors of these outcomes were blinded.

Some have attributed the MERIT trial's failure to demonstrate a benefit to inadequate implementation of the RRS, as the trial's calling rate of 8.7 calls per 1000 admissions was less than the 15 calls per 1000 admissions cited as optimal for a mature RRS.51 However, published studies generally reported calling rates of 4‐5 calls per 1000 admissions,22, 23, 37 with only 1 trial reporting a higher rate.34

A recent commentary20 and a systematic review of critical care outreach teams21 both addressed the effectiveness of RRSs. We sought to examine the effects of all RRS subtypes, using quantitative analysis and assessment of methodological quality to determine the overall effect of RRSs. The results of our analysis (which included data from several newer studies31, 38, 39) support and extend the conclusions of prior reviews: RRSs, although a potentially promising intervention, do not unequivocally benefit patients and are not worthy of more widespread use until more evidence becomes available. Our analysis also demonstrates that many studies widely cited as supporting wide implementation of RRSs are flawed and probably not generalizable.

Despite these caveats, RRSs remain an intuitively attractive concept and may be of benefit at some hospitals. Further studies in this area should focus on identifying which patient populations are at high risk for clinical decompensation, identifying the role of clinical structures of care (eg, nurse‐staffing ratio, presence of hospitalists) in preventing adverse outcomes, and determining which specific RRS model is most effective. As well, more information is needed about educating bedside staff and RRS team members, as this is likely critical to the success of the team. Unfortunately, only the article by King et al.31 provided sufficient detail about the implementation process to assist hospitals in planning an RRS. The remaining articles gave only scant details about the intervention and its implementation, a common problem noted in the quality improvement literature.42, 52, 53

Our analysis had several limitations. We attempted to identify as many RRS trials as possible by searching multiple databases and reviewing abstract proceedings, but as the RRS literature is in its infancy, we may not have located unpublished studies or gray literature. There is no validated system for evaluating the methodological strength of nonrandomized studies; therefore, we assessed study quality on the basis of prespecified criteria for internal and external validity. Finally, we found significant statistical heterogeneity in our quantitative analyses, indicating that the variability between individual studies in treatment effects was greater than would be expected by chance. As the primary reason we conducted a meta‐analysis was to compare the results of before‐after trials with those of the randomized MERIT trial, we did not further explore the reasons for this heterogeneity, although variation in patient populations and RRS structure likely accounts for a significant proportion of it.

Although there is a theoretical basis for implementing a rapid response system, the published literature shows inconsistent benefits to patients and suffers from serious methodological flaws. Future studies of RRSs should attempt to define which patient populations are at risk, the essential characteristics of RRSs, effective implementation strategies, and, most important, whether any RRS improves clinical outcomes. Until such evidence is available, hospitals should not be mandated to establish an RRS and should consider prioritizing quality improvement resources for interventions with a stronger evidence base.

Acknowledgements

The authors thank Emmanuel King, MD, for graciously providing a copy of his manuscript prior to publication and Alexis Meredith, MD, for providing additional information regarding his study. Dr. Shojania holds a Government of Canada Research Chair in Patient Safety and Quality Improvement.

APPENDIX


Literature Search Strategy (Performed through August 2006)
# | Search terms | Citations
1 | {(rapid [ti] AND (response [ti] OR resuscitation [ti]) OR (patient at risk [ti])} AND (program [ti] OR team* [ti] OR service* [ti]) | 23
2 | medical emergency team* [ti] OR medical crisis team* [ti] OR {(critical [ti] OR intensive [ti]) AND care [ti] AND outreach [ti]} | 87
3 | hospital [ti] AND resuscitation [ti] AND team* [ti] | 11
4 | medical emergency team* [ab] OR rapid response team [ab] OR medical crisis team* [ab] | 89
5 | #1 OR #2 OR #3 OR #4 | 158
6 | Resuscitation [mh] OR heart arrest [mh] OR hospital mortality [mh] | 72,488
7 | (patient care team [mh] OR critical care [mh] OR intensive care units [mh]) AND (patient readmission [mh] OR organization and administration [mh]) | 20,321
8 | #6 AND #7 | 1,419
9 | {(randomised[ti] OR randomized[ti] OR controlled[ti] OR intervention[ti] OR evaluation[ti] OR comparative[ti] OR effectiveness[ti] OR evaluation[ti] OR feasibility[ti]) AND (trial[ti] OR studies[ti] OR study[ti] OR program[ti] OR design[ti])} OR clinical trial[pt] OR randomized controlled trial[pt] OR epidemiologic studies[mh] OR evaluation studies[mh] OR comparative study[mh] OR feasibility studies[mh] OR intervention studies[mh] OR program evaluation[mh] OR epidemiologic research design[mh] OR systematic5 | 2,688,847
10 | #8 AND #9 | 748
11 | #5 OR #10 | 806
References
  1. Lee A, Bishop G, Hillman KM, Daffurn K. The medical emergency team. Anaesth Intensive Care. 1995;23(2):183-186.
  2. Berwick DM, Calkins DR, McCannon CJ, Hackbarth AD. The 100 000 Lives Campaign: setting a goal and a deadline for improving health care quality. JAMA. 2006;295(3):324-327.
  3. Garcea G, Thomasset S, McClelland L, Leslie A, Berry DP. Impact of a critical care outreach team on critical care readmissions and mortality. Acta Anaesthesiol Scand. 2004;48:1096-1100.
  4. Fletcher SJ, Flabouris A. The patient-at-risk team. Anaesthesia. 2000;55(2):198.
  5. DeVita MA, Bellomo R, Hillman K, et al. Findings of the First Consensus Conference on Medical Emergency Teams. Crit Care Med. 2006;34:2463-2478.
  6. Cioffi J. Recognition of patients who require emergency assistance: a descriptive study. Heart Lung. 2000;29(4):262-268.
  7. Hillman KM, Bristow PJ, Chey T, et al. Antecedents to hospital deaths. Intern Med J. 2001;31:343-348.
  8. Hillman KM, Bristow PJ, Chey T, et al. Duration of life-threatening antecedents prior to intensive care admission. Intensive Care Med. 2002;28:1629-1634.
  9. Hodgetts TJ, Kenward G, Vlachonikolis IG, Payne S, Castle N. The identification of risk factors for cardiac arrest and formulation of activation criteria to alert a medical emergency team. Resuscitation. 2002;54:125-131.
  10. Kause J, Smith G, Prytherch D, Parr M, Flabouris A, Hillman K. A comparison of antecedents to cardiac arrests, deaths and emergency intensive care admissions in Australia and New Zealand, and the United Kingdom—the ACADEMIA study. Resuscitation. 2004;62:275-282.
  11. Subbe CP, Kruger M, Rutherford P, Gemmel L. Validation of a modified Early Warning Score in medical admissions. QJM. 2001;94:521-526.
  12. Young MP, Gooder VJ, McBride K, James B, Fisher ES. Inpatient transfers to the intensive care unit: delays are associated with increased mortality and morbidity. J Gen Intern Med. 2003;18(2):77-83.
  13. Schein RM, Hazday N, Pena M, Ruben BH, Sprung CL. Clinical antecedents to in-hospital cardiopulmonary arrest. Chest. 1990;98:1388-1392.
  14. Franklin C, Mathew J. Developing strategies to prevent inhospital cardiac arrest: analyzing responses of physicians and nurses in the hours before the event. Crit Care Med. 1994;22(2):244-247.
  15. DeVita MA, Bellomo R, Hillman K. Introduction to the rapid response systems series. Jt Comm J Qual Patient Saf. 2006;32:359-360.
  16. Bristow PJ, Hillman KM, Chey T, et al. Rates of in-hospital arrests, deaths and intensive care admissions: the effect of a medical emergency team. Med J Aust. 2000;173:236-240.
  17. Goldhill DR, Worthington L, Mulcahy A, Tarling M, Sumner A. The patient-at-risk team: identifying and managing seriously ill ward patients. Anaesthesia. 1999;54:853-860.
  18. Kerridge RK. The medical emergency team: no evidence to justify not implementing change. Med J Aust. 2000;173:228-229.
  19. Kerridge RK, Saul WP. The medical emergency team, evidence-based medicine and ethics. Med J Aust. 2003;179:313-315.
  20. Winters BD, Pham J, Pronovost PJ. Rapid response teams—walk, don't run. JAMA. 2006;296:1645-1647.
  21. Esmonde L, McDonnell A, Ball C, et al. Investigating the effectiveness of critical care outreach services: a systematic review. Intensive Care Med. 2006;32:1713-1721.
  22. Buist MD, Moore GE, Bernard SA, Waxman BP, Anderson JN, Nguyen TV. Effects of a medical emergency team on reduction of incidence of and mortality from unexpected cardiac arrests in hospital: preliminary study. BMJ. 2002;324:387-390.
  23. Bellomo R, Goldsmith D, Uchino S, et al. A prospective before-and-after trial of a medical emergency team. Med J Aust. 2003;179:283-287.
  24. Pittard AJ. Out of our reach? Assessing the impact of introducing a critical care outreach service. Anaesthesia. 2003;58:882-885.
  25. Cochrane Collaboration Effective Practice and Organisation of Care group. Available at: http://www.epoc.uottawa.ca/inttime.pdf. Accessed August 4, 2006.
  26. Shadish W, Cook T, Campbell D. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, MA: Houghton Mifflin; 2002.
  27. Cretikos M, Parr M, Hillman K, et al. Guidelines for the uniform reporting of data for Medical Emergency Teams. Resuscitation. 2006;68(1):11-25.
  28. Kerry SM, Bland JM. The intracluster correlation coefficient in cluster randomisation. BMJ. 1998;316:1455.
  29. Donner A, Klar N. Issues in the meta-analysis of cluster randomized trials. Stat Med. 2002;21:2971-2980.
  30. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557-560.
  31. King E, Horvath R, Shulkin D. Establishing a rapid response team (RRT) in an academic hospital: one year's experience. J Hosp Med. 2006;1:296-305.
  32. Sebat F, Johnson D, Musthafa AA, et al. A multidisciplinary community hospital program for early and rapid resuscitation of shock in nontrauma patients. Chest. 2005;127:1729-1743.
  33. Bellomo R, Goldsmith D, Uchino S, et al. Prospective controlled trial of effect of medical emergency team on postoperative morbidity and mortality rates. Crit Care Med. 2004;32:916-921.
  34. DeVita MA, Braithwaite RS, Mahidhara R, Stuart S, Foraida M, Simmons RL. Use of medical emergency team responses to reduce hospital cardiopulmonary arrests. Qual Saf Health Care. 2004;13:251-254.
  35. Priestley G, Watson W, Rashidian A, et al. Introducing critical care outreach: a ward-randomised trial of phased introduction in a general hospital. Intensive Care Med. 2004;30:1398-1404.
  36. Hillman K, Chen J, Cretikos M, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet. 2005;365:2091-2097.
  37. Tibballs J, Kinney S, Duke T, Oakley E, Hennessy M. Reduction of paediatric in-patient cardiac arrest and death with a medical emergency team: preliminary results. Arch Dis Child. 2005;90:1148-1152.
  38. Hunt EA, Shilkofski N, Rinke ML, et al. The effect of transition from a traditional code team to a rapid response team in a children's center: a before and after intervention trial [abstract]. Crit Care Med. 2005;33(12 suppl):A17.
  39. Meredith A, Simpson SQ, Cleek C, Williamson T, O'Brien-Ladner A. Improved hospital mortality by institution of a rapid response team in a university hospital. Chest. 2005;128(suppl S):182S.
  40. Kenward G, Castle N, Hodgetts T, Shaikh L. Evaluation of a medical emergency team one year after implementation. Resuscitation. 2004;61(3):257-263.
  41. Campbell MK, Elbourne DR, Altman DG. CONSORT statement: extension to cluster randomised trials. BMJ. 2004;328:702-708.
  42. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff. 2005;24(1):138-150.
  43. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987-1993.
  44. Sprung CL, Geber D, Eidelman LA, et al. Evaluation of triage decisions for intensive care admission. Crit Care Med. 1999;27:1073-1079.
  45. Strauss MJ, LoGerfo JP, Yeltatzie JA, Temkin N, Hudson LD. Rationing of intensive care unit services. An everyday occurrence. JAMA. 1986;255:1143-1146.
  46. Selker HP, Griffith JL, Dorey FJ, D'Agostino RB. How do physicians adapt when the coronary care unit is full? A prospective multicenter study. JAMA. 1987;257:1181-1185.
  47. Sprivulis PC, Da Silva JA, Jacobs IG, Frazer AR, Jelinek GA. The association between hospital overcrowding and mortality among patients admitted via Western Australian emergency departments. Med J Aust. 2006;184:208-212.
  48. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288:2151-2162.
  49. Auerbach AD, Wachter RM, Katz P, Showstack J, Baron RB, Goldman L. Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes. Ann Intern Med. 2002;137:859-865.
  50. Subbe CP. Critical care outreach team's effect on patient outcome: other conclusions are possible. BMJ. 2004;328:347; author reply.
  51. The "MERIT" trial of medical emergency teams in Australia: an analysis of findings and implications for the 100,000 Lives Campaign. Institute for Healthcare Improvement, 2006. Available at: http://www.ihi.org/NR/rdonlyres/F3401FEF‐2179‐4403‐8F67–B9255C57E207/0/LancetAnalysis81505.pdf. Accessed August 17, 2006.
  52. Grimshaw J, Eccles M, Thomas R, et al. Toward evidence-based quality improvement. Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966-1998. J Gen Intern Med. 2006;21(suppl 2):S14-S20.
  53. Hagedorn H, Hogan M, Smith JL, et al. Lessons learned about implementing research evidence into clinical practice. Experiences from VA QUERI. J Gen Intern Med. 2006;21(suppl 2):S21-S24.
  54. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of reporting of meta-analyses. Lancet. 1999;354:1896-1900.
  55. Foraida MI, DeVita MA, Braithwaite RS, Stuart SA, Brooks MM, Simmons RL. Improving the utilization of medical crisis teams (Condition C) at an urban tertiary care hospital. J Crit Care. 2003;18(2):87-94.
  56. Jones D, Bellomo R, Bates S, et al. Long term effect of a medical emergency team on cardiac arrests in a teaching hospital. Crit Care. 2005;9:R808-R815.
Issue
Journal of Hospital Medicine - 2(6)
Page Number
422-432
Display Headline
Effects of rapid response systems on clinical outcomes: Systematic review and meta‐analysis
Legacy Keywords
systematic review, rapid response systems
Article Source
Copyright © 2007 Society of Hospital Medicine
Correspondence Location
Hospitalist Group, Department of Medicine, University of California San Francisco, 533 Parnassus Avenue, Box 0131, San Francisco, CA 94143‐0131; Fax (415) 514‐2094

Impact of CT on PE Diagnosis

Article Type
Changed
Sun, 05/28/2017 - 22:53
Display Headline
Impact of reliance on CT pulmonary angiography on diagnosis of pulmonary embolism: A Bayesian analysis

Spiral computed tomographic pulmonary angiography (CTPA) is a common first‐line test for the evaluation of suspected pulmonary embolism (PE). At our institution CTPA became the initial diagnostic study in 83% of patients with suspected PE within 3 years of the introduction of CT,1 and by 2001 CTPA had become the most common diagnostic test performed nationwide in patients diagnosed with PE.2 Most scans are interpreted as either positive or negative for pulmonary embolism, providing clinicians with a greater sense of diagnostic certainty than with the probabilistic results of lung scintigraphy. Initial studies of CTPA supported this appearance of diagnostic certainty, reporting sensitivity and specificity of greater than 90%,3, 4 but several subsequent studies have failed to reproduce these results.5-7 Newer multidetector CT scans are believed to be more accurate than earlier single‐detector CT,8 but true estimates of CTPA test characteristics will not be known until publication of the forthcoming PIOPED II study.9

Even without these data, CT‐based diagnostic algorithms have already appeared.10-14 These algorithms generally focus on minimizing the false‐negative rate through use of serial testing (involving combinations of serum D‐dimer, lower‐extremity ultrasound, and CTPA). A recent meta‐analysis demonstrated that negative CTPA is highly accurate at ruling out PE, with test characteristics similar to conventional pulmonary angiography.15 Another meta‐analysis found that the 3‐month rate of subsequent venous thromboembolism after negative CTPA was 1.4% (95% CI 1.1%‐1.8%),16 supporting the strategy of withholding anticoagulants after negative CTPA in combination with other tests. However, use of serial testing to establish the diagnosis of PE and initiate anticoagulation has not been systematically evaluated or recommended, even for patients with a low pretest probability of PE.17

To assess the potential impact of these algorithms on the diagnosis of PE in clinical practice, we analyzed the clinical presentation and treatment of a cohort of patients at our institution who underwent CTPA for suspected PE.1 We calculated a range of posttest probabilities for pulmonary embolism for these patients, given the pretest probabilities, test results, and estimates of CTPA test characteristics. We then compared the treatment decisions of clinicians to the posttest probabilities of PE in order to establish the potential frequency of false‐positive and false‐negative diagnoses and to determine if patients were treated appropriately based on these estimates.

METHODS

Sites and Subjects

Details of the sites, subjects, and methods used to collect patient‐level data in this analysis have been previously published.1 The study was performed at Moffitt‐Long Hospital and San Francisco General Hospital, teaching hospitals affiliated with the University of California San Francisco School of Medicine. At both sites, single‐detector CT scans were available 24 hours a day throughout the study period and were read by attending radiologists who specialized in thoracic imaging. We excluded patients whose CTPA was not completed as the initial test in the evaluation of suspected PE, those who underwent testing for any indication other than suspected acute PE, and those with incomplete medical records or technically inadequate CTPA.

We randomly selected 345 patients who underwent CTPA between January 1, 1998, and December 31, 2000, from the Radiology Department databases. One investigator (R.L.T.) then abstracted charts of all patients. For each subject, we collected data about history and clinical presentation, diagnostic impressions of the treating clinicians, treatments administered both before and after diagnostic testing, CTPA result, results of other diagnostic tests for PE, and final clinical diagnosis. During the study period, there were no institution‐ or department‐specific guidelines or decision aids available for the diagnosis of PE. Ventilation‐perfusion scan, lower extremity ultrasound, and pulmonary angiography were available, but highly sensitive D‐dimer assays were not in use. The study was approved by the Institutional Review Boards of both sites, and requirement for written informed consent from patients was waived.

Estimates of Pretest Probabilities of Pulmonary Embolism and CTPA Test Characteristics

Several prediction rules18-20 generate clinical pretest probabilities for patients with suspected PE. We used the Wells score18 to assign a pretest probability of low, moderate, or high to each patient on the basis of the following clinical variables: leg swelling, hemoptysis, tachycardia, history of recent immobilization, history of prior DVT or PE, active malignancy, and lack of a more likely alternative diagnosis. We chose this rule because, unlike other prediction rules such as the Geneva rule,20 the Wells score has been validated for hospitalized patients with suspected PE and does not require arterial blood gas measurements. The prevalence of PE reported in the evaluation of the Wells score was 3.4%, 27.8%, and 78.3% for low, moderate, and high pretest probabilities, respectively.18

As in our previous study,1 we assumed CTPA to be 90% sensitive and 95% specific based on published estimates.3, 17 These values correspond to a positive likelihood ratio of 18 and a negative likelihood ratio of 0.1.21 We chose these values as a best‐case estimate of the test characteristics of CTPA, although other studies have found less impressive results.7 Using these pretest probabilities and likelihood ratios, we then used Bayes' theorem (Figure 1) to calculate the range of expected posttest probabilities of pulmonary embolism.

Figure 1
Bayes' theorem.
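The calculation in Figure 1 is easiest to carry out in odds form. The sketch below is ours, not the authors' (function and variable names are illustrative); it applies the assumed CTPA likelihood ratios (LR+ of 18, LR- of 0.1) to the Wells-category pretest probabilities:

```python
def posttest_probability(pretest: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: posttest odds = pretest odds x likelihood ratio."""
    pretest_odds = pretest / (1.0 - pretest)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Assumed CTPA test characteristics (sensitivity 0.90, specificity 0.95)
LR_POSITIVE = 18.0   # sensitivity / (1 - specificity)
LR_NEGATIVE = 0.1    # (1 - sensitivity) / specificity, rounded as in the text

# Wells-category pretest probabilities of PE
PRETEST = {"low": 0.034, "moderate": 0.278, "high": 0.783}

for category, p in PRETEST.items():
    print(f"{category}: positive CTPA -> {posttest_probability(p, LR_POSITIVE):.1%}, "
          f"negative CTPA -> {posttest_probability(p, LR_NEGATIVE):.1%}")
```

With these inputs a positive scan takes a moderate pretest probability to 87.4% and a high one to 98.5%, reproducing the figures reported below; the low-category result is sensitive to the exact pretest prevalence assumed.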

Calculation of Posttest Probabilities and Comparison to Treatment Outcomes

For each pretest probability category, we used the posttest probabilities calculated above to determine the number of true‐positive pulmonary emboli, multiplying the number of positive scans in that category by the corresponding posttest probability. We then compared treatment decisions made by clinicians at our hospital to the calculated posttest probabilities and number of true‐positive diagnoses of PE. We considered the difference between the number of patients treated for PE and the number of true‐positive diagnoses of PE to represent possible false‐positive diagnoses. In a similar fashion, we determined the number of likely true‐negative diagnoses of PE and considered the difference between the number of patients not treated for PE and the number of true‐negative diagnoses to represent possible false‐negative diagnoses.
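Under our reading of this method (expected true positives = positive scans times the posttest probability, rounded to the nearest patient), the bookkeeping can be sketched with the article's reported counts and posttest probabilities; the code and names below are illustrative, not the authors':

```python
# Counts and posttest probabilities as reported in the article, by Wells category
positive_scans = {"low": 22, "moderate": 22, "high": 13}   # positive CTPA results
treated_for_pe = {"low": 21, "moderate": 21, "high": 13}   # treated after positive CTPA
posttest_positive = {"low": 0.416, "moderate": 0.874, "high": 0.985}

for category in positive_scans:
    # Expected true positives: positive scans x posttest probability of PE
    true_pos = round(positive_scans[category] * posttest_positive[category])
    # Possible false positives: patients treated beyond the expected true positives
    possible_false_pos = treated_for_pe[category] - true_pos
    print(f"{category}: {true_pos} probable true-positive, "
          f"{possible_false_pos} possible false-positive")
```

This reproduces the 9, 19, and 13 probable true positives and the 12, 2, and 0 possible false positives (14 of 55 overall) reported in the Results.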

RESULTS

Patient Characteristics

After excluding 23 patients receiving anticoagulants for other indications prior to CTPA, the study cohort included 322 patients (57.7% female), with an average age of 58.6 years, of whom 20.5% had cancer and 4.5% had a prior history of thromboembolic disease. Scans were primarily ordered by the medicine service (47.7% of cases) and emergency department (22.9%). CTPA was the initial test for 9% of patients evaluated for suspected acute PE during the first 6 months of the study period, increasing to 83% by the end of 2000.1 The overall pretest probability distribution remained the same throughout the entire study period.1

Test Results and Treatment Decisions

Most patients in our cohort had a low (n = 184, 57.1%) or a moderate (n = 101, 31.4%) pretest probability of PE (Table 1). The likelihood of a positive CTPA increased as the pretest probability increased, but even among patients with high clinical risk, only 35.1% had positive CT scans. In total, scans were positive in 57 patients and negative in 265 patients. Clinicians treated 55 patients with a positive CTPA (96.5%); none of these patients underwent additional testing for DVT or PE after the imaging study. Among patients with a negative CTPA, 254 (95.8%) were not treated; none of the patients in whom anticoagulation was withheld underwent further testing, whereas the other 11 patients were treated on the basis of other tests (5 high‐probability ventilation‐perfusion scans, 3 positive leg ultrasounds, and 3 for unclear reasons). Overall, 66 patients (20.5%) were treated for pulmonary embolism.

Study Results Stratified by Pretest Probability
Pretest probability of PE (number of CTPA performed): Low (N = 184); Moderate (N = 101); High (N = 37); Total (N = 322)
  • Low, moderate, and high pretest probabilities were determined using the Wells criteria.18 The probability of PE in each category was 3.4%, 27.8%, and 78.3%, respectively.

CTPA positive for PE (% of pretest probability group): 22 (12.0%); 22 (21.8%); 13 (35.1%); 57 (17.7%)
CTPA negative for PE (% of pretest probability group): 162 (88.0%); 79 (78.2%); 24 (64.9%); 265 (82.3%)
Patients with positive CT subsequently treated for PE (% of pretest probability group): 21 (11.4%); 21 (20.8%); 13 (35.1%); 55 (17.1%)
Patients treated for PE despite negative CT (% of pretest probability group): 5 (2.7%); 3 (3.0%); 3 (8.1%); 11 (3.4%)
Total patients treated for PE (% of pretest probability group): 26 (14.1%); 24 (23.8%); 16 (43.2%); 66 (20.5%)

Literature‐Derived Estimates of Posttest Probabilities of Pulmonary Embolism

Patients who have a low pretest probability of PE and a positive CTPA have a posttest probability of 41.6% under our estimate of CTPA test characteristics. Patients with moderate pretest probability have a posttest probability of 87.4% and patients with a high pretest probability will have a 98.5% probability of embolism with a positive scan. The traditional treatment threshold for PE is a posttest probability of 90%.22

Observed Versus Expected PE Rates and Subsequent Treatment

Only 9 of the 22 patients (41%) with a low pretest probability and a positive CTPA likely represent true‐positive emboli. However, clinicians chose to treat 21 of the 22 patients with this combination of pretest probability and imaging findings. Thus, 12 emboli would be considered possible false‐positive diagnoses. Similarly, 2 of the 21 treated patients with moderate pretest probability and 0 of the 13 treated patients with high pretest probability had a possible false‐positive diagnosis. Thus, in total, 25.4% (14 of 55) of patients treated for PE had a possible false‐positive diagnosis of pulmonary embolism and may have been unnecessarily administered anticoagulants (Table 2). All patients who potentially had a false‐positive PE had either a low or moderate pretest probability of PE; in fact, the majority (57.1%) of patients with a low pretest probability of PE who were subsequently treated for PE likely had a false‐positive diagnosis.

Clinical Treatment Decisions Compared to Calculated Number of True‐Positive Pulmonary Emboli in Patients Treated for PE
Pretest probability: Low (n = 184); Moderate (n = 101); High (n = 37); Total (n = 322)
  • The number of false‐positive pulmonary emboli in each group was determined by subtracting the calculated number of true‐positive evaluations from the number of patients who were treated in each group. The total number in each category was calculated as the sum of each pretest probability group.

CTPA positive for PE (% of pretest probability group): 22 (12.0%); 22 (21.8%); 13 (35.1%); 57 (17.7%)
Patients with positive CTPA treated for pulmonary embolism (n, % treated in risk group): 21 (95.4%); 21 (95.4%); 13 (100%); 55 (96.5%)
Calculated number of probable true‐positive PE (n, % treated in risk group): 9 (42.9%); 19 (90.5%); 13 (100%); 41 (74.6%)
Calculated number of possible false‐positive PE (n, % in risk group with unexpected PE): 12 (57.1%); 2 (9.5%); 0; 14 (25.4%)

Clinicians were more likely to overtreat a patient with a possible false‐positive CT scan than to withhold treatment from a patient with a possible false‐negative diagnosis. Using the same estimates of CTPA test characteristics, the incidence of possible false‐negative diagnosis of PE was 1.6% (4 possible false‐negative diagnoses among 254 patients with negative CTPA results who were not treated for PE). All these patients had a high pretest probability of PE.

DISCUSSION

Physicians at our institution regarded CTPA results as definitive, anticoagulating 96.5% of patients with a positive CT and withholding treatment in 95.8% of patients with a negative scan. This practice pattern may result in unnecessary anticoagulation of many patients with a low pretest probability of PE who may have had false‐positive CTPA findings. In contrast, the rate of possible false‐negative diagnosis of PE was low, consistent with the results of several other studies.16

The use of CTPA is likely to increase because of the publication of multiple algorithms advocating that CTPA be the chief imaging study used in the diagnosis of PE.10-14 These algorithms recommend serial testing on patients with a negative CTPA in order to minimize the false‐negative rate, but they do not require systematic follow‐up in patients with a positive scan, even if the pretest probability was low. In management trials, this approach resulted in a low false‐negative rate (1.0%‐1.8% at 3‐month follow‐up).11-14 However, the rate of major bleeding in patients treated for PE was 3.2%‐6.0% at 3 months,12-14 illustrating the potential risk of anticoagulating patients who may have false‐positive diagnoses. Furthermore, premature diagnostic closure after a CTPA positive for PE may result in additional morbidity as a result of missing the true diagnosis.

One potential explanation for the large number of potential false‐positive emboli seen in low‐risk patients is that it is difficult to accurately diagnose distal pulmonary emboli with CTPA. The interrater reliability of CTPA for diagnosis of subsegmental PE is suboptimal,23 and the clinical significance of these emboli remains uncertain.24 Thus, many emboli found in patients with low pretest probability actually may have been subsegmental PE that would not have been diagnosed by another radiologist. As CTPA is more accurate for diagnosing central PE,25 clinicians should consider reviewing positive scans with the interpreting radiologist, especially when the pretest probability was low and the filling defects identified are in distal vessels.

Our results may also illustrate that clinicians have a lower treatment threshold when presented with apparently definitive evidence of pulmonary embolism. Previous proposals on the appropriate treatment threshold for PE, which used Bayesian decision‐making methods similar to ours,22 incorporated PIOPED26 data on the pretest probability of pulmonary embolism, the test characteristics of ventilation‐perfusion scans, and the clinical outcomes of patients in each test result/pretest probability category. However, there are no corresponding data for CTPA, as its test characteristics are still uncertain, and long‐term clinical outcomes have not been documented for patients treated (or not treated) on the basis of CT results.

Our study had several limitations. First, charting bias may have been introduced by our retrospective method of collecting data for calculating pretest probabilities. To address this potential bias, we collected data from the entire medical record, including information available at and preceding the time of the CT scan. We believe this method was effective, as the range of pretest probabilities and the prevalence of PE in our study were very similar to those seen in a number of prospective studies.18-20, 26, 27 Although other risk indices exist, the Wells score has been shown to have predictive power equal to other algorithms and to clinicians' implicit assessments.28, 29 In our cohort, 35.1% of patients with a high pretest probability were diagnosed with PE; although this was lower than that in the initial Wells cohort,18 it was very similar to a subsequent validation study using the Wells algorithm, in which the prevalence of PE in patients with high pretest probability was 37.5%.27 Plasma D‐dimer testing is not routinely used at our hospitals, but it is a component of some CTPA‐based diagnostic algorithms.11-14 Although use of D‐dimer testing may have led to fewer scans in patients with negative D‐dimer test results and low pretest probability,30 the high false‐positive rate for D‐dimer assays31 makes it difficult to predict the effect of widespread D‐dimer use on the overall pretest probability distribution. Using our assumptions about CT test characteristics, a pretest probability of more than 30% is required to generate a posttest probability of PE of at least 90% (the traditional treatment threshold for anticoagulant therapy22) with a positive scan. Extensive D‐dimer use would be unlikely to cause such a shift in the distribution of pretest probabilities.

Finally, CT technology has continued to advance, and many institutions now use 64‐slice scanners32 in contrast to the single‐slice scanners in use at the time our data were collected. Our assumptions were that CTPA has a positive likelihood ratio of 18.0 and a negative likelihood ratio of 0.1 (corresponding to a sensitivity of 90% and a specificity of 95%), although many studies of single‐detector CTPA found less impressive values.5, 7 Multidetector CT is thought to be more accurate than was earlier technology, but the true diagnostic performance of multidetector CT is not yet known. However, our findings pertain primarily to clinicians' responses to test results, so even if newer scanners are more accurate, Bayesian analysis will still be required in order to appropriately treat patients. A recent meta‐analysis of diagnostic strategies for PE found CTPA to have a positive likelihood ratio of 24.1, but even using this higher value, patients with a low pretest probability and positive CTPA still have a posttest probability of PE below the traditional treatment threshold.33 As most patients undergoing evaluation for suspected PE have a low pretest probability,17 a substantial number of false‐positive diagnoses of PE may still occur, even with a more accurate diagnostic test.
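The two numerical claims above can be checked by inverting Bayes' theorem. The sketch below is ours (illustrative names, not the authors' code): it finds the pretest probability needed to reach the 90% treatment threshold at LR+ of 18, and the posttest probability for a low-risk patient at the meta-analytic LR+ of 24.1:

```python
def pretest_needed(posttest_target: float, lr_positive: float) -> float:
    """Invert Bayes' theorem: pretest odds = posttest odds / LR+."""
    target_odds = posttest_target / (1.0 - posttest_target)
    pretest_odds = target_odds / lr_positive
    return pretest_odds / (1.0 + pretest_odds)

def posttest_probability(pretest: float, likelihood_ratio: float) -> float:
    """Forward direction: posttest odds = pretest odds x likelihood ratio."""
    odds = pretest / (1.0 - pretest) * likelihood_ratio
    return odds / (1.0 + odds)

# Pretest probability required to exceed the 90% treatment threshold at LR+ = 18
print(f"{pretest_needed(0.90, 18.0):.1%}")          # 33.3%, i.e., "more than 30%"

# Posttest probability for a low pretest probability (3.4%) at LR+ = 24.1
print(f"{posttest_probability(0.034, 24.1):.1%}")   # still well below 90%
```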

CT pulmonary angiography has become the first‐line test for pulmonary embolism at our institution, a situation likely mirrored elsewhere. CTPA is safe and rapid and offers the advantage of revealing ancillary lung findings that may be clinically significant.12 Although the test is an important addition to a clinician's diagnostic armamentarium, Bayesian analysis must be used to interpret its results, especially when CTPA is used as the first‐line diagnostic test. Our data raise the troubling concern that reliance on CTPA as the sole diagnostic test for suspected pulmonary embolism may result in a large number of patients with false‐positive CT scans receiving anticoagulation treatment.

References
  1. Trowbridge RL,Araoz PA,Gotway M,Bailey R,Auerbach AD.The impact of helical computed tomography on diagnostic and treatment strategies in patients with suspected pulmonary embolism.Am J Med.2004;116:84-90.
  2. Stein PD,Kayali F,Olson RE.Trends in the use of diagnostic imaging in patients hospitalized with acute pulmonary embolism.Am J Cardiol.2004;93:1316-1317.
  3. Remy-Jardin M,Remy J,Wattinne L,Giraud F.Central pulmonary thromboembolism: diagnosis with spiral volumetric CT with the single-breath-hold technique—comparison with pulmonary angiography.Radiology.1992;185:381-387.
  4. van Rossum AB,Pattynama PM,Ton ER, et al.Pulmonary embolism: validation of spiral CT angiography in 149 patients.Radiology.1996;201:467-470.
  5. van Beek EJ,Brouwers EM,Song B,Bongaerts AH,Oudkerk M.Lung scintigraphy and helical computed tomography for the diagnosis of pulmonary embolism: a meta-analysis.Clin Appl Thromb Hemost.2001;7(2):87-92.
  6. Mullins MD,Becker DM,Hagspiel KD,Philbrick JT.The role of spiral volumetric computed tomography in the diagnosis of pulmonary embolism.Arch Intern Med.2000;160(3):293-298.
  7. Rathbun SW,Raskob GE,Whitsett TL.Sensitivity and specificity of helical computed tomography in the diagnosis of pulmonary embolism: a systematic review.Ann Intern Med.2000;132(3):227-232.
  8. Winer-Muram HT,Rydberg J,Johnson MS, et al.Suspected acute pulmonary embolism: evaluation with multi-detector row CT versus digital subtraction pulmonary arteriography.Radiology.2004;233:806-815.
  9. Gottschalk A,Stein PD,Goodman LR,Sostman HD.Overview of Prospective Investigation of Pulmonary Embolism Diagnosis II.Semin Nucl Med.2002;32(3):173-182.
  10. Ghanima W,Almaas V,Aballi S, et al.Management of suspected pulmonary embolism (PE) by D-dimer and multi-slice computed tomography in outpatients: an outcome study.J Thromb Haemost.2005;3:1926-1932.
  11. Perrier A,Roy PM,Sanchez O, et al.Multidetector-row computed tomography in suspected pulmonary embolism.N Engl J Med.2005;352:1760-1768.
  12. van Strijen MJ,de Monye W,Schiereck J, et al.Single-detector helical computed tomography as the primary diagnostic test in suspected pulmonary embolism: a multicenter clinical management study of 510 patients.Ann Intern Med.2003;138:307-314.
  13. Musset D,Parent F,Meyer G, et al.Diagnostic strategy for patients with suspected pulmonary embolism: a prospective multicentre outcome study.Lancet.2002;360:1914-1920.
  14. Perrier A,Roy PM,Aujesky D, et al.Diagnosing pulmonary embolism in outpatients with clinical assessment, D-dimer measurement, venous ultrasound, and helical computed tomography: a multicenter management study.Am J Med.2004;116:291-299.
  15. Quiroz R,Kucher N,Zou KH, et al.Clinical validity of a negative computed tomography scan in patients with suspected pulmonary embolism: a systematic review.JAMA.2005;293:2012-2017.
  16. Moores LK,Jackson WL,Shorr AF,Jackson JL.Meta-analysis: outcomes in patients with suspected pulmonary embolism managed with computed tomographic pulmonary angiography.Ann Intern Med.2004;141:866-874.
  17. Fedullo PF,Tapson VF.Clinical practice: the evaluation of suspected pulmonary embolism.N Engl J Med.2003;349:1247-1256.
  18. Wells PS,Ginsberg JS,Anderson DR, et al.Use of a clinical model for safe management of patients with suspected pulmonary embolism.Ann Intern Med.1998;129:997-1005.
  19. Miniati M,Monti S,Bottai M.A structured clinical model for predicting the probability of pulmonary embolism.Am J Med.2003;114(3):173-179.
  20. Wicki J,Perneger TV,Junod AF,Bounameaux H,Perrier A.Assessing clinical probability of pulmonary embolism in the emergency ward: a simple score.Arch Intern Med.2001;161(1):92-97.
  21. Black E,Bordley D,Tape T,Panzer R.Interpretation of diagnostic tests and strategies for their use in quantitative decision making. In: Diagnostic Strategies for Common Medical Problems.Philadelphia, PA:American College of Physicians;1999.
  22. Stein PD,Hull RD,Saltzman HA,Pineo G.Strategy for diagnosis of patients with suspected acute pulmonary embolism.Chest.1993;103:1553-1559.
  23. Ruiz Y,Caballero P,Caniego JL, et al.Prospective comparison of helical CT with angiography in pulmonary embolism: global and selective vascular territory analysis. Interobserver agreement.Eur Radiol.2003;13:823-829.
  24. Stein PD,Henry JW.Prevalence of acute pulmonary embolism among patients in a general hospital and at autopsy.Chest.1995;108:978-981.
  25. Perrier A,Howarth N,Didier D, et al.Performance of helical computed tomography in unselected outpatients with suspected pulmonary embolism.Ann Intern Med.2001;135(2):88-97.
  26. Value of the ventilation/perfusion scan in acute pulmonary embolism. Results of the prospective investigation of pulmonary embolism diagnosis (PIOPED). The PIOPED Investigators.JAMA.1990;263:2753-2759.
  27. Wells PS,Anderson DR,Rodger M, et al.Excluding pulmonary embolism at the bedside without diagnostic imaging: management of patients with suspected pulmonary embolism presenting to the emergency department by using a simple clinical model and d-dimer.Ann Intern Med.2001;135(2):98-107.
  28. Chagnon I,Bounameaux H,Aujesky D, et al.Comparison of two clinical prediction rules and implicit assessment among patients with suspected pulmonary embolism.Am J Med.2002;113(4):269-275.
  29. Chunilal SD,Eikelboom JW,Attia J, et al.Does this patient have pulmonary embolism?JAMA.2003;290:2849-2858.
  30. Kruip MJ,Leclercq MG,van der Heul C,Prins MH,Buller HR.Diagnostic strategies for excluding pulmonary embolism in clinical outcome studies. A systematic review.Ann Intern Med.2003;138:941-951.
  31. Stein PD,Hull RD,Patel KC, et al.D‐dimer for the exclusion of acute venous thrombosis and pulmonary embolism: a systematic review.Ann Intern Med.2004;140:589602.
  32. Goldhaber SZ.Multislice computed tomography for pulmonary embolism—a technological marvel.N Engl J Med2005;352(17):18124.
  33. Roy PM,Colombet I,Durieux P,Chatellier G,Sors H,Meyer G.Systematic review and meta‐analysis of strategies for the diagnosis of suspected pulmonary embolism.Br Med J.2005;331:259.
Journal of Hospital Medicine - 1(2)
Pages 81-87
Keywords: pulmonary embolism, CT pulmonary angiography, Bayes' theorem, diagnosis

Spiral computed tomographic pulmonary angiography (CTPA) is a common first-line test for the evaluation of suspected pulmonary embolism (PE). At our institution CTPA became the initial diagnostic study in 83% of patients with suspected PE within 3 years of the introduction of CT,1 and by 2001 CTPA had become the most common diagnostic test performed nationwide in patients diagnosed with PE.2 Most scans are interpreted as either positive or negative for pulmonary embolism, providing clinicians with a greater sense of diagnostic certainty than the probabilistic results of lung scintigraphy. Initial studies of CTPA supported this appearance of diagnostic certainty, reporting sensitivity and specificity of greater than 90%,3, 4 but several subsequent studies have failed to reproduce these results.5-7 Newer multidetector CT scans are believed to be more accurate than earlier single-detector CT,8 but true estimates of CTPA test characteristics will not be known until publication of the forthcoming PIOPED II study.9

Even without these data, CT-based diagnostic algorithms have already appeared.10-14 These algorithms generally focus on minimizing the false-negative rate through use of serial testing (involving combinations of serum D-dimer, lower-extremity ultrasound, and CTPA). A recent meta-analysis demonstrated that negative CTPA is highly accurate at ruling out PE, with test characteristics similar to conventional pulmonary angiography.15 Another meta-analysis found that the 3-month rate of subsequent venous thromboembolism after negative CTPA was 1.4% (95% CI 1.1%-1.8%),16 supporting the strategy of withholding anticoagulants after negative CTPA in combination with other tests. However, use of serial testing to establish the diagnosis of PE and initiate anticoagulation has not been systematically evaluated or recommended, even for patients with a low pretest probability of PE.17

To assess the potential impact of these algorithms on the diagnosis of PE in clinical practice, we analyzed the clinical presentation and treatment of a cohort of patients at our institution who underwent CTPA for suspected PE.1 We calculated a range of posttest probabilities for pulmonary embolism for these patients, given the pretest probabilities, test results, and estimates of CTPA test characteristics. We then compared the treatment decisions of clinicians to the posttest probabilities of PE in order to establish the potential frequency of false‐positive and false‐negative diagnoses and to determine if patients were treated appropriately based on these estimates.

METHODS

Sites and Subjects

Details of the sites, subjects, and methods used to collect patient‐level data in this analysis have been previously published.1 The study was performed at Moffitt‐Long Hospital and San Francisco General Hospital, teaching hospitals affiliated with the University of California San Francisco School of Medicine. At both sites, single‐detector CT scans were available 24 hours a day throughout the study period and were read by attending radiologists who specialized in thoracic imaging. We excluded patients whose CTPA was not completed as the initial test in the evaluation of suspected PE, those who underwent testing for any indication other than suspected acute PE, and those with incomplete medical records or technically inadequate CTPA.

We randomly selected 345 patients who underwent CTPA between January 1, 1998, and December 31, 2000, from the Radiology Department databases. One investigator (R.L.T.) then abstracted charts of all patients. For each subject, we collected data about history and clinical presentation, diagnostic impressions of the treating clinicians, treatments administered both before and after diagnostic testing, CTPA result, results of other diagnostic tests for PE, and final clinical diagnosis. During the study period, there were no institution‐ or department‐specific guidelines or decision aids available for the diagnosis of PE. Ventilation‐perfusion scan, lower extremity ultrasound, and pulmonary angiography were available, but highly sensitive D‐dimer assays were not in use. The study was approved by the Institutional Review Boards of both sites, and requirement for written informed consent from patients was waived.

Estimates of Pretest Probabilities of Pulmonary Embolism and CTPA Test Characteristics

Several prediction rules18-20 generate clinical pretest probabilities for patients with suspected PE. We used the Wells score18 to assign a pretest probability of low, moderate, or high to each patient on the basis of the following clinical variables: leg swelling, hemoptysis, tachycardia, history of recent immobilization, history of prior deep venous thrombosis (DVT) or PE, active malignancy, and lack of a more likely alternative diagnosis. We chose this rule because, unlike other prediction rules such as the Geneva rule,20 the Wells score has been validated for hospitalized patients with suspected PE and does not require arterial blood gas measurements. The prevalence of PE reported in the evaluation of the Wells score was 3.4%, 27.8%, and 78.3% for low, moderate, and high pretest probabilities, respectively.18

As in our previous study,1 we assumed CTPA to be 90% sensitive and 95% specific based on published estimates.3, 17 These values correspond to a positive likelihood ratio of 18 and a negative likelihood ratio of 0.1.21 We chose these values as a best‐case estimate of the test characteristics of CTPA, although other studies have found less impressive results.7 Using these pretest probabilities and likelihood ratios, we then used Bayes' theorem (Figure 1) to calculate the range of expected posttest probabilities of pulmonary embolism.

Figure 1
Bayes' theorem.
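Figure 1 presents Bayes' theorem; in the odds-likelihood form implied by the Methods (our rendering of the standard formula, using the sensitivity and specificity values assumed above), the computation is:

```latex
\text{pretest odds} = \frac{p_{\text{pre}}}{1 - p_{\text{pre}}}, \qquad
\text{posttest odds} = \text{pretest odds} \times LR, \qquad
p_{\text{post}} = \frac{\text{posttest odds}}{1 + \text{posttest odds}}
```

```latex
LR_{+} = \frac{\text{sensitivity}}{1 - \text{specificity}} = \frac{0.90}{0.05} = 18, \qquad
LR_{-} = \frac{1 - \text{sensitivity}}{\text{specificity}} = \frac{0.10}{0.95} \approx 0.1
```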

Calculation of Posttest Probabilities and Comparison to Treatment Outcomes

For each pretest probability category, we used the posttest probabilities calculated above to determine the number of true-positive pulmonary emboli: the number of positive CTPA examinations in each category was multiplied by the corresponding posttest probability, and the result was rounded to the nearest whole patient. We then compared treatment decisions made by clinicians at our hospital to the calculated posttest probabilities and number of true-positive diagnoses of PE. We considered the difference between the number of patients treated for PE and the number of true-positive diagnoses of PE to represent possible false-positive diagnoses. In a similar fashion, we determined the number of likely true-negative diagnoses of PE and considered the difference between the number of patients not treated for PE and the number of true-negative diagnoses to represent possible false-negative diagnoses.
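As a concrete illustration, the calculation above can be sketched in a few lines (our own sketch; the likelihood ratio and pretest prevalences are the published assumptions, and the scan and treatment counts come from Tables 1 and 2; the computed posttest percentages may differ slightly from those quoted in the text because of rounding in the published inputs):

```python
# Sketch of the Bayesian calculation described above (our illustration).
# Pretest prevalences come from the Wells score validation; LR+ = 18
# corresponds to the assumed 90% sensitivity and 95% specificity.

def posttest_probability(pretest_p: float, likelihood_ratio: float) -> float:
    """Posttest probability via the odds form of Bayes' theorem."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

LR_POSITIVE = 18.0
PRETEST = {"low": 0.034, "moderate": 0.278, "high": 0.783}  # Wells prevalences
POSITIVE_SCANS = {"low": 22, "moderate": 22, "high": 13}    # Table 1
TREATED = {"low": 21, "moderate": 21, "high": 13}           # Table 2

for group in ("low", "moderate", "high"):
    p_post = posttest_probability(PRETEST[group], LR_POSITIVE)
    true_pos = round(POSITIVE_SCANS[group] * p_post)  # expected true positives
    false_pos = TREATED[group] - true_pos             # possible false positives
    print(f"{group}: posttest={p_post:.1%}, true+={true_pos}, false+={false_pos}")
```

Run as written, this reproduces the true-positive counts of 9, 19, and 13 and the possible false-positive counts of 12, 2, and 0 reported in Table 2.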

RESULTS

Patient Characteristics

After excluding 23 patients receiving anticoagulants for other indications prior to CTPA, the study cohort included 322 patients (57.7% female), with an average age of 58.6 years, of whom 20.5% had cancer and 4.5% had a prior history of thromboembolic disease. Scans were primarily ordered by the medicine service (47.7% of cases) and emergency department (22.9%). CTPA was the initial test for 9% of patients evaluated for suspected acute PE during the first 6 months of the study period, increasing to 83% by the end of 2000.1 The overall pretest probability distribution remained the same throughout the entire study period.1

Test Results and Treatment Decisions

Most patients in our cohort had a low (n = 184, 57.1%) or a moderate (n = 101, 31.4%) pretest probability of PE (Table 1). The likelihood of a positive CTPA increased as the pretest probability increased, but even among patients with high clinical risk, only 35.1% had positive CT scans. In total, scans were positive in 57 patients and negative in 265 patients. Clinicians treated 55 patients with a positive CTPA (96.5%); none of these patients underwent additional testing for DVT or PE after the imaging study. Among patients with a negative CTPA, 254 (95.8%) were not treated; none of the patients in whom anticoagulation was withheld underwent further testing, whereas the other 11 patients were treated on the basis of other tests (5 high‐probability ventilation‐perfusion scans, 3 positive leg ultrasounds, and 3 for unclear reasons). Overall, 66 patients (20.5%) were treated for pulmonary embolism.

Table 1. Study Results Stratified by Pretest Probability

Pretest probability of PE (number of CTPA performed) | Low (N = 184) | Moderate (N = 101) | High (N = 37) | Total (N = 322)
CTPA positive for PE | 22 (12.0%) | 22 (21.8%) | 13 (35.1%) | 57 (17.7%)
CTPA negative for PE | 162 (88.0%) | 79 (78.2%) | 24 (64.9%) | 265 (82.3%)
Patients with positive CT subsequently treated for PE | 21 (11.4%) | 21 (20.8%) | 13 (35.1%) | 55 (17.1%)
Patients treated for PE despite negative CT | 5 (2.7%) | 3 (3.0%) | 3 (8.1%) | 11 (3.4%)
Total patients treated for PE | 26 (14.1%) | 24 (23.8%) | 16 (43.2%) | 66 (20.5%)

Note: Percentages are of the number of CTPA performed in each pretest probability group. Low, moderate, and high pretest probabilities were determined using the Wells criteria.18 The probability of PE in each category was 3.4%, 27.8%, and 78.3%, respectively.

Literature‐Derived Estimates of Posttest Probabilities of Pulmonary Embolism

Patients who have a low pretest probability of PE and a positive CTPA have a posttest probability of 41.6% under our estimate of CTPA test characteristics. Patients with moderate pretest probability have a posttest probability of 87.4% and patients with a high pretest probability will have a 98.5% probability of embolism with a positive scan. The traditional treatment threshold for PE is a posttest probability of 90%.22

Observed Versus Expected PE Rates and Subsequent Treatment

Only 9 of the 22 patients (41%) with a low pretest probability and a positive CTPA likely represent true-positive emboli. However, clinicians chose to treat 21 of the 22 patients with this combination of pretest probability and imaging findings; thus, 12 of these diagnoses would be considered possible false positives. Similarly, 2 of the 21 patients with moderate pretest probability and 0 of the 13 patients with high pretest probability who were treated for PE had possibly false-positive diagnoses. In total, therefore, 25.4% (14 of 55) of patients treated for PE had a possible false-positive diagnosis of pulmonary embolism and may have been administered anticoagulants unnecessarily (Table 2). All patients who potentially had a false-positive PE had either a low or moderate pretest probability of PE; in fact, the majority (57.1%) of patients with a low pretest probability of PE who were subsequently treated for PE likely had a false-positive diagnosis.

Table 2. Clinical Treatment Decisions Compared to Calculated Number of True-Positive Pulmonary Emboli in Patients Treated for PE

Pretest probability | Low (n = 184) | Moderate (n = 101) | High (n = 37) | Total (n = 322)
CTPA positive for PE (% of pretest probability group) | 22 (12.0%) | 22 (21.8%) | 13 (35.1%) | 57 (17.7%)
Patients with positive CTPA treated for PE (n, % of positive scans in group) | 21 (95.4%) | 21 (95.4%) | 13 (100%) | 55 (96.5%)
Calculated number of probable true-positive PE (n, % of treated in group) | 9 (42.9%) | 19 (90.5%) | 13 (100%) | 41 (74.6%)
Calculated number of possible false-positive PE (n, % of treated in group) | 12 (57.1%) | 2 (9.5%) | 0 (0%) | 14 (25.4%)

Note: The number of false-positive pulmonary emboli in each group was determined by subtracting the calculated number of true-positive evaluations from the number of patients treated in that group. The total in each row is the sum across the pretest probability groups.

Clinicians were more likely to overtreat a patient with a possible false-positive CT scan than to withhold treatment from a patient with a possible false-negative diagnosis. Using the same estimates of CTPA test characteristics, the incidence of possible false-negative diagnosis of PE was 1.6% (4 possible false-negative diagnoses among the 254 patients with negative CTPA results who were not treated for PE). All of these patients had a high pretest probability of PE.

DISCUSSION

Physicians at our institution regarded CTPA results as definitive, anticoagulating 96.5% of patients with a positive CT and withholding treatment in 95.8% of patients with a negative scan. This practice pattern may result in unnecessary anticoagulation of many patients with a low pretest probability of PE who may have had false‐positive CTPA findings. In contrast, the rate of possible false‐negative diagnosis of PE was low, consistent with the results of several other studies.16

The use of CTPA is likely to increase because of the publication of multiple algorithms advocating that CTPA be the chief imaging study used in the diagnosis of PE.10-14 These algorithms recommend serial testing on patients with a negative CTPA in order to minimize the false-negative rate, but they do not require systematic follow-up in patients with a positive scan, even if the pretest probability was low. In management trials, this approach resulted in a low false-negative rate (1.0%-1.8% at 3-month follow-up).11-14 However, the rate of major bleeding in patients treated for PE was 3.2%-6.0% at 3 months,12-14 illustrating the potential risk of anticoagulating patients who may have false-positive diagnoses. Furthermore, premature diagnostic closure after a CTPA positive for PE may result in additional morbidity as a result of missing the true diagnosis.

One potential explanation for the large number of potential false‐positive emboli seen in low‐risk patients is that it is difficult to accurately diagnose distal pulmonary emboli with CTPA. The interrater reliability of CTPA for diagnosis of subsegmental PE is suboptimal,23 and the clinical significance of these emboli remains uncertain.24 Thus, many emboli found in patients with low pretest probability actually may have been subsegmental PE that would not have been diagnosed by another radiologist. As CTPA is more accurate for diagnosing central PE,25 clinicians should consider reviewing positive scans with the interpreting radiologist, especially when the pretest probability was low and the filling defects identified are in distal vessels.

Our results may also illustrate that clinicians have a lower treatment threshold when presented with apparently definitive evidence of pulmonary embolism. Previous proposals on the appropriate treatment threshold for PE, which used Bayesian decision‐making methods similar to ours,22 incorporated PIOPED26 data on the pretest probability of pulmonary embolism, the test characteristics of ventilation‐perfusion scans, and the clinical outcomes of patients in each test result/pretest probability category. However, there is no corresponding data for CTPA, as its test characteristics are still uncertain, and long‐term clinical outcomes have not been documented for patients treated (or not treated) on the basis of CT results.

Our study had several limitations. First, our retrospective method of collecting data for calculating pretest probabilities may have introduced charting bias. To address this potential bias, we collected data from the entire medical record, including information available at and preceding the time of the CT scan. We believe this method was effective, as the range of pretest probabilities and the prevalence of PE in our study were very similar to those seen in a number of prospective studies.18-20, 26, 27 Although other risk indices exist, the Wells score has been shown to have predictive power equal to that of other algorithms and of clinicians' implicit assessments.28, 29 In our cohort, 35.1% of patients with a high pretest probability were diagnosed with PE; although this was lower than in the initial Wells cohort,18 it was very similar to a subsequent validation study using the Wells algorithm, in which the prevalence of PE in patients with high pretest probability was 37.5%.27 Plasma D-dimer testing is not routinely used at our hospitals, but it is a component of some CTPA-based diagnostic algorithms.11-14 Although use of D-dimer testing may have led to fewer scans in patients with negative D-dimer test results and low pretest probability,30 the high false-positive rate for D-dimer assays31 makes it difficult to predict the effect of widespread D-dimer use on the overall pretest probability distribution. Using our assumptions about CT test characteristics, a pretest probability of more than 30% is required to generate a posttest probability of PE of at least 90% (the traditional treatment threshold for anticoagulant therapy22) with a positive scan. Extensive D-dimer use would be unlikely to cause such a shift in the distribution of pretest probabilities.
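The "more than 30%" figure follows from inverting the same odds arithmetic (a quick check of our own; `pretest_needed` is a hypothetical helper name, not from the source):

```python
def pretest_needed(posttest_p: float, likelihood_ratio: float) -> float:
    """Minimum pretest probability that yields the given posttest
    probability after a positive test with the given likelihood ratio."""
    posttest_odds = posttest_p / (1.0 - posttest_p)
    pretest_odds = posttest_odds / likelihood_ratio
    return pretest_odds / (1.0 + pretest_odds)

# Posttest odds at 90% are 9; dividing by LR+ = 18 gives pretest odds
# of 0.5, i.e. a pretest probability of 1/3 (about 33%).
print(pretest_needed(0.90, 18.0))  # -> 0.333...
```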

Finally, CT technology has continued to advance, and many institutions now use 64‐slice scanners32 in contrast to the single‐slice scanners in use at the time our data were collected. Our assumptions were that CTPA has a positive likelihood ratio of 18.0 and a negative likelihood ratio of 0.1 (corresponding to a sensitivity of 90% and a specificity of 95%), although many studies of single‐detector CTPA found less impressive values.5, 7 Multidetector CT is thought to be more accurate than was earlier technology, but the true diagnostic performance of multidetector CT is not yet known. However, our findings pertain primarily to clinicians' responses to test results, so even if newer scanners are more accurate, Bayesian analysis will still be required in order to appropriately treat patients. A recent meta‐analysis of diagnostic strategies for PE found CTPA to have a positive likelihood ratio of 24.1, but even using this higher value, patients with a low pretest probability and positive CTPA still have a posttest probability of PE below the traditional treatment threshold.33 As most patients undergoing evaluation for suspected PE have a low pretest probability,17 a substantial number of false‐positive diagnoses of PE may still occur, even with a more accurate diagnostic test.
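The claim about the higher meta-analysis estimate can be verified with the same odds arithmetic (our sketch; the function name is ours):

```python
def posttest_probability(pretest_p: float, likelihood_ratio: float) -> float:
    """Posttest probability via the odds form of Bayes' theorem."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Even with the meta-analysis estimate LR+ = 24.1, a low (3.4%) pretest
# probability yields a posttest probability of roughly 46%, well below
# the traditional 90% treatment threshold.
print(posttest_probability(0.034, 24.1))
```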

CT pulmonary angiography has become the first‐line test for pulmonary embolism at our institution, a situation likely mirrored elsewhere. CTPA is safe and rapid and offers the advantage of revealing ancillary lung findings that may be clinically significant.12 Although the test is an important addition to a clinician's diagnostic armamentarium, Bayesian analysis must be used to interpret its results, especially when CTPA is used as the first‐line diagnostic test. Our data raise the troubling concern that reliance on CTPA as the sole diagnostic test for suspected pulmonary embolism may result in a large number of patients with false‐positive CT scans receiving anticoagulation treatment.

Spiral computed tomographic pulmonary angiography (CTPA) is a common first‐line test for the evaluation of suspected pulmonary embolism (PE). At our institution CTPA became the initial diagnostic study in 83% of patients with suspected PE within 3 years of the introduction of CT,1 and by 2001 CTPA had become the most common diagnostic test performed nationwide in patients diagnosed with PE.2 Most scans are interpreted as either positive or negative for pulmonary embolism, providing clinicians with a greater sense of diagnostic certainty than with the probabilistic results of lung scintigraphy. Initial studies of CTPA supported this appearance of diagnostic certainty, reporting sensitivity and specificity of greater than 90%,3, 4 but several subsequent studies have failed to reproduce these results.57 Newer multidetector CT scans are believed to be more accurate than earlier single‐detector CT,8 but true estimates of CTPA test characteristics will not be known until publication of the forthcoming PIOPED II study.9

Even without these data, CT‐based diagnostic algorithms have already appeared.1014 These algorithms generally focus on minimizing the false‐negative rate through use of serial testing (involving combinations of serum D‐dimer, lower‐extremity ultrasound, and CTPA). A recent meta‐analysis demonstrated that negative CTPA is highly accurate at ruling out PE, with test characteristics similar to conventional pulmonary angiography.15 Another meta‐analysis found that the 3‐month rate of subsequent venous thromboembolism after negative CTPA was 1.4% (95% CI 1.1%‐1.8%),16 supporting the strategy of withholding anticoagulants after negative CTPA in combination with other tests. However, use of serial testing to establish the diagnosis of PE and initiate anticoagulation has not been systematically evaluated or recommended, even for patients with a low pretest probability of PE.17

To assess the potential impact of these algorithms on the diagnosis of PE in clinical practice, we analyzed the clinical presentation and treatment of a cohort of patients at our institution who underwent CTPA for suspected PE.1 We calculated a range of posttest probabilities for pulmonary embolism for these patients, given the pretest probabilities, test results, and estimates of CTPA test characteristics. We then compared the treatment decisions of clinicians to the posttest probabilities of PE in order to establish the potential frequency of false‐positive and false‐negative diagnoses and to determine if patients were treated appropriately based on these estimates.

METHODS

Sites and Subjects

Details of the sites, subjects, and methods used to collect patient‐level data in this analysis have been previously published.1 The study was performed at Moffitt‐Long Hospital and San Francisco General Hospital, teaching hospitals affiliated with the University of California San Francisco School of Medicine. At both sites, single‐detector CT scans were available 24 hours a day throughout the study period and were read by attending radiologists who specialized in thoracic imaging. We excluded patients whose CTPA was not completed as the initial test in the evaluation of suspected PE, those who underwent testing for any indication other than suspected acute PE, and those with incomplete medical records or technically inadequate CTPA.

We randomly selected 345 patients who underwent CTPA between January 1, 1998, and December 31, 2000, from the Radiology Department databases. One investigator (R.L.T.) then abstracted charts of all patients. For each subject, we collected data about history and clinical presentation, diagnostic impressions of the treating clinicians, treatments administered both before and after diagnostic testing, CTPA result, results of other diagnostic tests for PE, and final clinical diagnosis. During the study period, there were no institution‐ or department‐specific guidelines or decision aids available for the diagnosis of PE. Ventilation‐perfusion scan, lower extremity ultrasound, and pulmonary angiography were available, but highly sensitive D‐dimer assays were not in use. The study was approved by the Institutional Review Boards of both sites, and requirement for written informed consent from patients was waived.

Estimates of Pretest Probabilities of Pulmonary Embolism and CTPA Test Characteristics

Several prediction rules1820 generate clinical pretest probabilities for patients with suspected PE. We used the Wells score18 to assign a pretest probability of low, moderate, or high to each patient on the basis of the following clinical variables: leg swelling, hemoptysis, tachycardia, history of recent immobilization, history of prior DVT or PE, active malignancy, and lack of a more likely alternative diagnosis. We chose this rule as (unlike other prediction rules such as the Geneva rule20) the Wells score has been validated for hospitalized patients with suspected PE and does not require arterial blood gas measurements. The prevalence of PE reported in the evaluation of the Wells score was 3.4%, 27.8%, and 78.3% for low, moderate, and high pretest probabilities, respectively.18

As in our previous study,1 we assumed CTPA to be 90% sensitive and 95% specific based on published estimates.3, 17 These values correspond to a positive likelihood ratio of 18 and a negative likelihood ratio of 0.1.21 We chose these values as a best‐case estimate of the test characteristics of CTPA, although other studies have found less impressive results.7 Using these pretest probabilities and likelihood ratios, we then used Bayes' theorem (Figure 1) to calculate the range of expected posttest probabilities of pulmonary embolism.

Figure 1
Bayes' theorem.

Calculation of Posttest Probabilities and Comparison to Treatment Outcomes

For each pretest probability category, we used the posttest probabilities calculated above to determine the number of true‐positive pulmonary emboli, as follows: We then compared treatment decisions made by clinicians at our hospital to the calculated posttest probabilities and number of true‐positive diagnoses of PE. We considered the difference between the number of patients treated for PE and the number of true‐positive diagnoses of PE to represent possible false‐positive diagnoses. In a similar fashion, we determined the number of likely true‐negative diagnoses of PE and considered the difference between the number of patients not treated for PE and the number of true‐negative diagnoses to represent possible false‐negative diagnoses.

RESULTS

Patient Characteristics

After excluding 23 patients receiving anticoagulants for other indications prior to CTPA, the study cohort included 322 patients (57.7% female), with an average age of 58.6 years, of whom 20.5% had cancer and 4.5% had a prior history of thromboembolic disease. Scans were primarily ordered by the medicine service (47.7% of cases) and emergency department (22.9%). CTPA was the initial test for 9% of patients evaluated for suspected acute PE during the first 6 months of the study period, increasing to 83% by the end of 2000.1 The overall pretest probability distribution remained the same throughout the entire study period.1

Test Results and Treatment Decisions

Most patients in our cohort had a low (n = 184, 57.1%) or a moderate (n = 101, 31.4%) pretest probability of PE (Table 1). The likelihood of a positive CTPA increased as the pretest probability increased, but even among patients with high clinical risk, only 35.1% had positive CT scans. In total, scans were positive in 57 patients and negative in 265 patients. Clinicians treated 55 patients with a positive CTPA (96.5%); none of these patients underwent additional testing for DVT or PE after the imaging study. Among patients with a negative CTPA, 254 (95.8%) were not treated; none of the patients in whom anticoagulation was withheld underwent further testing, whereas the other 11 patients were treated on the basis of other tests (5 high‐probability ventilation‐perfusion scans, 3 positive leg ultrasounds, and 3 for unclear reasons). Overall, 66 patients (20.5%) were treated for pulmonary embolism.

Study Results Stratified by Pretest Probability
Pretest probability of PE (number of CTPA performed)Low (N = 184)Moderate (N = 101)High (N = 37)Total (N = 322)
  • Low, moderate, and high pretest probabilities were determined using the Wells criteria.18 The probability of PE in each category was 3.4%, 27.8%, and 78.3%, respectively.

CTPA positive for PE (% of pretest probability group)22 (12.0%)22 (21.8%)13 (35.1%)57 (17.7%)
CTPA negative for PE (% of pretest probability group)162 (88.0%)79 (78.2%)24 (64.9%)265 (82.3%)
Patients with positive CT subsequently treated for PE (% of pretest probability group)21 (11.4%)21 (20.8%)13 (35.1%)55 (17.1%)
Patients treated for PE despite negative CT (% of pretest probability group)5 (2.7%)3 (3.0%)3 (8.1%)11 (3.4%)
Total patients treated for PE (% of pretest probability group)26 (14.1%)24 (23.8%)16 (43.2%)66 (20.5%)

Literature‐Derived Estimates of Posttest Probabilities of Pulmonary Embolism

Patients who have a low pretest probability of PE and a positive CTPA have a posttest probability of 41.6% under our estimate of CTPA test characteristics. Patients with moderate pretest probability have a posttest probability of 87.4% and patients with a high pretest probability will have a 98.5% probability of embolism with a positive scan. The traditional treatment threshold for PE is a posttest probability of 90%.22

Observed Versus Expected PE Rates and Subsequent Treatment

Only 9 of the 22 patients (41%) with a low pretest probability and a positive CTPA likely represent true‐positive emboli. However, clinicians chose to treat 21 of the 22 patients with this combination of pretest probability and imaging findings. Thus, 12 emboli would be considered possible false‐positive diagnoses. Similarly, in the moderate pretest probability group, 2 of 21 patients with moderate pretest probability and 0 of 13 patients with high pretest probability treated for PE had a possibly false‐positive diagnosis. Thus, in total, 25.4% (14 of 55) patients treated for PE had a possible false‐positive diagnosis of pulmonary embolism and may have been unnecessarily administered anticoagulants (Table 2). All patients who potentially had a false‐positive PE had either a low or moderate pretest probability of PE; in fact, the majority (57.1%) of patients with a low pretest probability of PE who were subsequently treated for PE likely had a false‐positive diagnosis.

Clinical Treatment Decisions Compared to Calculated Number of True‐Positive Pulmonary Emboli in Patients Treated for PE

| | Low (n = 184) | Moderate (n = 101) | High (n = 37) | Total (n = 322) |
| --- | --- | --- | --- | --- |
| CTPA positive for PE (% of pretest probability group) | 22 (12.0%) | 22 (21.8%) | 13 (35.1%) | 57 (17.7%) |
| Patients with positive CTPA treated for pulmonary embolism (n, % treated in risk group) | 21 (95.4%) | 21 (95.4%) | 13 (100%) | 55 (96.5%) |
| Calculated number of probable true‐positive PE (n, % treated in risk group) | 9 (42.9%) | 19 (90.5%) | 13 (100%) | 41 (74.6%) |
| Calculated number of possible false‐positive PE (n, % in risk group with unexpected PE) | 12 (57.1%) | 2 (9.5%) | 0 | 14 (25.4%) |

  • The number of false‐positive pulmonary emboli in each group was determined by subtracting the calculated number of true‐positive evaluations from the number of patients treated in each group. The total in each category is the sum across pretest probability groups.

Clinicians were more likely to overtreat a patient with a possible false‐positive CT scan than to withhold treatment from a patient with a possible false‐negative diagnosis. Using the same estimates of CTPA test characteristics, the incidence of possible false‐negative diagnoses of PE was 1.6% (4 possible false‐negative diagnoses among 254 patients with negative CTPA results who were not treated for PE). All of these patients had a high pretest probability of PE.

DISCUSSION

Physicians at our institution regarded CTPA results as definitive, anticoagulating 96.5% of patients with a positive CT and withholding treatment in 95.8% of patients with a negative scan. This practice pattern may result in unnecessary anticoagulation of many patients with a low pretest probability of PE who may have had false‐positive CTPA findings. In contrast, the rate of possible false‐negative diagnosis of PE was low, consistent with the results of several other studies.16

The use of CTPA is likely to increase because of the publication of multiple algorithms advocating CTPA as the chief imaging study in the diagnosis of PE.10-14 These algorithms recommend serial testing in patients with a negative CTPA in order to minimize the false‐negative rate, but they do not require systematic follow‐up in patients with a positive scan, even if the pretest probability was low. In management trials, this approach resulted in a low false‐negative rate (1.0%-1.8% at 3‐month follow‐up).11-14 However, the rate of major bleeding in patients treated for PE was 3.2%-6.0% at 3 months,12-14 illustrating the potential risk of anticoagulating patients who may have false‐positive diagnoses. Furthermore, premature diagnostic closure after a CTPA positive for PE may result in additional morbidity if the true diagnosis is missed.

One potential explanation for the large number of potential false‐positive emboli seen in low‐risk patients is that it is difficult to accurately diagnose distal pulmonary emboli with CTPA. The interrater reliability of CTPA for diagnosis of subsegmental PE is suboptimal,23 and the clinical significance of these emboli remains uncertain.24 Thus, many emboli found in patients with low pretest probability actually may have been subsegmental PE that would not have been diagnosed by another radiologist. As CTPA is more accurate for diagnosing central PE,25 clinicians should consider reviewing positive scans with the interpreting radiologist, especially when the pretest probability was low and the filling defects identified are in distal vessels.

Our results may also illustrate that clinicians have a lower treatment threshold when presented with apparently definitive evidence of pulmonary embolism. Previous proposals on the appropriate treatment threshold for PE, which used Bayesian decision‐making methods similar to ours,22 incorporated PIOPED26 data on the pretest probability of pulmonary embolism, the test characteristics of ventilation‐perfusion scans, and the clinical outcomes of patients in each test result/pretest probability category. However, there are no corresponding data for CTPA, as its test characteristics are still uncertain, and long‐term clinical outcomes have not been documented for patients treated (or not treated) on the basis of CT results.

Our study had several limitations. First, our retrospective method of collecting the data used to calculate pretest probabilities may have introduced charting bias. To address this potential bias, we collected data from the entire medical record, including information available at and preceding the time of the CT scan. We believe this method was effective, as the range of pretest probabilities and the prevalence of PE in our study were very similar to those seen in a number of prospective studies.18-20, 26, 27 Although other risk indices exist, the Wells score has been shown to have predictive power equal to that of other algorithms and of clinicians' implicit assessments.28, 29 In our cohort, 35.1% of patients with a high pretest probability were diagnosed with PE; although this was lower than in the initial Wells cohort,18 it was very similar to a subsequent validation study of the Wells algorithm, in which the prevalence of PE in patients with high pretest probability was 37.5%.27 Plasma D‐dimer testing is not routinely used at our hospitals, but it is a component of some CTPA‐based diagnostic algorithms.11-14 Although use of D‐dimer testing may have led to fewer scans in patients with negative D‐dimer results and low pretest probability,30 the high false‐positive rate of D‐dimer assays31 makes it difficult to predict the effect of widespread D‐dimer use on the overall pretest probability distribution. Using our assumptions about CT test characteristics, a pretest probability of more than 30% is required to generate a posttest probability of PE of at least 90% (the traditional treatment threshold for anticoagulant therapy22) with a positive scan. Extensive D‐dimer use would be unlikely to cause such a shift in the distribution of pretest probabilities.
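The pretest probability needed to clear the treatment threshold can be found by inverting Bayes' theorem in odds form. A minimal sketch, assuming the same LR+ of 18 and a 90% threshold as in the text:

```python
def required_pretest(target_posttest: float, lr: float) -> float:
    """Minimum pretest probability at which a positive result (with the
    given likelihood ratio) pushes the posttest probability to the target."""
    target_odds = target_posttest / (1 - target_posttest)
    pretest_odds = target_odds / lr  # invert the odds-form Bayes update
    return pretest_odds / (1 + pretest_odds)

# With LR+ = 18 and a 90% treatment threshold, the required pretest
# probability works out to one-third (i.e., more than 30%):
print(f"{required_pretest(0.90, 18.0):.1%}")  # → 33.3%
```

A target posterior odds of 9 (90%) divided by an LR of 18 gives prior odds of 0.5, i.e., a pretest probability of 1/3, consistent with the "more than 30%" figure above.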

Finally, CT technology has continued to advance, and many institutions now use 64‐slice scanners32 in contrast to the single‐slice scanners in use at the time our data were collected. Our assumptions were that CTPA has a positive likelihood ratio of 18.0 and a negative likelihood ratio of 0.1 (corresponding to a sensitivity of 90% and a specificity of 95%), although many studies of single‐detector CTPA found less impressive values.5, 7 Multidetector CT is thought to be more accurate than was earlier technology, but the true diagnostic performance of multidetector CT is not yet known. However, our findings pertain primarily to clinicians' responses to test results, so even if newer scanners are more accurate, Bayesian analysis will still be required in order to appropriately treat patients. A recent meta‐analysis of diagnostic strategies for PE found CTPA to have a positive likelihood ratio of 24.1, but even using this higher value, patients with a low pretest probability and positive CTPA still have a posttest probability of PE below the traditional treatment threshold.33 As most patients undergoing evaluation for suspected PE have a low pretest probability,17 a substantial number of false‐positive diagnoses of PE may still occur, even with a more accurate diagnostic test.
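The stated likelihood ratios follow directly from the assumed sensitivity and specificity, and the same odds-form update shows why even the meta-analytic LR+ of 24.1 leaves low-pretest-probability patients below the treatment threshold. In this sketch the ~4% low pretest probability is an illustrative assumption:

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Positive and negative likelihood ratios from test characteristics."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def posttest_probability(pretest: float, lr: float) -> float:
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

lr_pos, lr_neg = likelihood_ratios(0.90, 0.95)
print(lr_pos, lr_neg)  # ≈ 18.0 and ≈ 0.105, the values assumed in the text

# Even at LR+ = 24.1, an assumed low pretest probability of ~4% yields a
# posttest probability of roughly 50%, well below a 90% treatment threshold:
print(f"{posttest_probability(0.04, 24.1):.1%}")
```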

CT pulmonary angiography has become the first‐line test for pulmonary embolism at our institution, a situation likely mirrored elsewhere. CTPA is safe and rapid and offers the advantage of revealing ancillary lung findings that may be clinically significant.12 Although the test is an important addition to a clinician's diagnostic armamentarium, Bayesian analysis must be used to interpret its results, especially when CTPA is used as the first‐line diagnostic test. Our data raise the troubling concern that reliance on CTPA as the sole diagnostic test for suspected pulmonary embolism may result in a large number of patients with false‐positive CT scans receiving anticoagulation treatment.

References
  1. Trowbridge RL, Araoz PA, Gotway M, Bailey R, Auerbach AD. The impact of helical computed tomography on diagnostic and treatment strategies in patients with suspected pulmonary embolism. Am J Med. 2004;116:84-90.
  2. Stein PD, Kayali F, Olson RE. Trends in the use of diagnostic imaging in patients hospitalized with acute pulmonary embolism. Am J Cardiol. 2004;93:1316-1317.
  3. Remy-Jardin M, Remy J, Wattinne L, Giraud F. Central pulmonary thromboembolism: diagnosis with spiral volumetric CT with the single-breath-hold technique—comparison with pulmonary angiography. Radiology. 1992;185:381-387.
  4. van Rossum AB, Pattynama PM, Ton ER, et al. Pulmonary embolism: validation of spiral CT angiography in 149 patients. Radiology. 1996;201:467-470.
  5. van Beek EJ, Brouwers EM, Song B, Bongaerts AH, Oudkerk M. Lung scintigraphy and helical computed tomography for the diagnosis of pulmonary embolism: a meta-analysis. Clin Appl Thromb Hemost. 2001;7(2):87-92.
  6. Mullins MD, Becker DM, Hagspiel KD, Philbrick JT. The role of spiral volumetric computed tomography in the diagnosis of pulmonary embolism. Arch Intern Med. 2000;160(3):293-298.
  7. Rathbun SW, Raskob GE, Whitsett TL. Sensitivity and specificity of helical computed tomography in the diagnosis of pulmonary embolism: a systematic review. Ann Intern Med. 2000;132(3):227-232.
  8. Winer-Muram HT, Rydberg J, Johnson MS, et al. Suspected acute pulmonary embolism: evaluation with multi-detector row CT versus digital subtraction pulmonary arteriography. Radiology. 2004;233:806-815.
  9. Gottschalk A, Stein PD, Goodman LR, Sostman HD. Overview of Prospective Investigation of Pulmonary Embolism Diagnosis II. Semin Nucl Med. 2002;32(3):173-182.
  10. Ghanima W, Almaas V, Aballi S, et al. Management of suspected pulmonary embolism (PE) by D-dimer and multi-slice computed tomography in outpatients: an outcome study. J Thromb Haemost. 2005;3:1926-1932.
  11. Perrier A, Roy PM, Sanchez O, et al. Multidetector-row computed tomography in suspected pulmonary embolism. N Engl J Med. 2005;352:1760-1768.
  12. van Strijen MJ, de Monye W, Schiereck J, et al. Single-detector helical computed tomography as the primary diagnostic test in suspected pulmonary embolism: a multicenter clinical management study of 510 patients. Ann Intern Med. 2003;138:307-314.
  13. Musset D, Parent F, Meyer G, et al. Diagnostic strategy for patients with suspected pulmonary embolism: a prospective multicentre outcome study. Lancet. 2002;360:1914-1920.
  14. Perrier A, Roy PM, Aujesky D, et al. Diagnosing pulmonary embolism in outpatients with clinical assessment, D-dimer measurement, venous ultrasound, and helical computed tomography: a multicenter management study. Am J Med. 2004;116:291-299.
  15. Quiroz R, Kucher N, Zou KH, et al. Clinical validity of a negative computed tomography scan in patients with suspected pulmonary embolism: a systematic review. JAMA. 2005;293:2012-2017.
  16. Moores LK, Jackson WL, Shorr AF, Jackson JL. Meta-analysis: outcomes in patients with suspected pulmonary embolism managed with computed tomographic pulmonary angiography. Ann Intern Med. 2004;141:866-874.
  17. Fedullo PF, Tapson VF. Clinical practice: the evaluation of suspected pulmonary embolism. N Engl J Med. 2003;349:1247-1256.
  18. Wells PS, Ginsberg JS, Anderson DR, et al. Use of a clinical model for safe management of patients with suspected pulmonary embolism. Ann Intern Med. 1998;129:997-1005.
  19. Miniati M, Monti S, Bottai M. A structured clinical model for predicting the probability of pulmonary embolism. Am J Med. 2003;114(3):173-179.
  20. Wicki J, Perneger TV, Junod AF, Bounameaux H, Perrier A. Assessing clinical probability of pulmonary embolism in the emergency ward: a simple score. Arch Intern Med. 2001;161(1):92-97.
  21. Black E, Bordley D, Tape T, Panzer R. Interpretation of diagnostic tests and strategies for their use in quantitative decision making. In: Diagnostic Strategies for Common Medical Problems. Philadelphia, PA: American College of Physicians; 1999.
  22. Stein PD, Hull RD, Saltzman HA, Pineo G. Strategy for diagnosis of patients with suspected acute pulmonary embolism. Chest. 1993;103:1553-1559.
  23. Ruiz Y, Caballero P, Caniego JL, et al. Prospective comparison of helical CT with angiography in pulmonary embolism: global and selective vascular territory analysis. Interobserver agreement. Eur Radiol. 2003;13:823-829.
  24. Stein PD, Henry JW. Prevalence of acute pulmonary embolism among patients in a general hospital and at autopsy. Chest. 1995;108:978-981.
  25. Perrier A, Howarth N, Didier D, et al. Performance of helical computed tomography in unselected outpatients with suspected pulmonary embolism. Ann Intern Med. 2001;135(2):88-97.
  26. The PIOPED Investigators. Value of the ventilation/perfusion scan in acute pulmonary embolism: results of the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED). JAMA. 1990;263:2753-2759.
  27. Wells PS, Anderson DR, Rodger M, et al. Excluding pulmonary embolism at the bedside without diagnostic imaging: management of patients with suspected pulmonary embolism presenting to the emergency department by using a simple clinical model and D-dimer. Ann Intern Med. 2001;135(2):98-107.
  28. Chagnon I, Bounameaux H, Aujesky D, et al. Comparison of two clinical prediction rules and implicit assessment among patients with suspected pulmonary embolism. Am J Med. 2002;113(4):269-275.
  29. Chunilal SD, Eikelboom JW, Attia J, et al. Does this patient have pulmonary embolism? JAMA. 2003;290:2849-2858.
  30. Kruip MJ, Leclercq MG, van der Heul C, Prins MH, Buller HR. Diagnostic strategies for excluding pulmonary embolism in clinical outcome studies: a systematic review. Ann Intern Med. 2003;138:941-951.
  31. Stein PD, Hull RD, Patel KC, et al. D-dimer for the exclusion of acute venous thrombosis and pulmonary embolism: a systematic review. Ann Intern Med. 2004;140:589-602.
  32. Goldhaber SZ. Multislice computed tomography for pulmonary embolism—a technological marvel. N Engl J Med. 2005;352(17):1812-1814.
  33. Roy PM, Colombet I, Durieux P, Chatellier G, Sors H, Meyer G. Systematic review and meta-analysis of strategies for the diagnosis of suspected pulmonary embolism. BMJ. 2005;331:259.
Issue
Journal of Hospital Medicine - 1(2)
Page Number
81-87
Display Headline
Impact of reliance on CT pulmonary angiography on diagnosis of pulmonary embolism: A Bayesian analysis
Legacy Keywords
pulmonary embolism, CT pulmonary angiography, Bayes' theorem, diagnosis
Article Source

Copyright © 2006 Society of Hospital Medicine

Correspondence Location
Department of Medicine, University of California San Francisco, 533 Parnassus Avenue, Box 0131, San Francisco, CA 94143‐0131; Fax: (415) 514‐2094