The average American adult reads at an 8th-grade level.1 Limited general literacy can affect health literacy, which is defined as the “degree to which individuals have the capacity to obtain, process and understand basic health information and services needed to make appropriate health decisions.”2,3 Adults with limited health literacy are at risk for poorer outcomes, including overuse of the emergency department and lower adherence to preventive care recommendations.4
Children transitioning from hospital to home depend on their adult caregivers (and their caregivers’ health literacy) to carry out discharge instructions. During the immediate postdischarge period, complex care needs can involve new or changed medications, follow-up instructions, home care instructions, and suggestions regarding when and why to seek additional care.
The discharge education provided to patients in the hospital is often subpar because of lack of standardization and divided responsibility among providers.5 Communication of vital information to patients with low health literacy has been noted to be particularly poor,6 as many patient education materials are written at 10th-, 11th-, and 12th-grade reading levels.4 Evidence supports providing materials written at 6th-grade level or lower to increase comprehension.7 Several studies have evaluated the quality and readability of discharge instructions for hospitalized adults,8,9 and one study found a link between poorly written instructions for adult patients and readmission risk.10 Less is known about readability in pediatrics, in which discharge education is directed at the families of children, who are most commonly hospitalized for acute illness.
We conducted a study to describe readability levels, understandability scores, and completeness of written instructions given to families at hospital discharge.
METHODS
Study Design and Setting
In this study, we performed a cross-sectional review of discharge instructions within electronic health records at Cincinnati Children’s Hospital Medical Center (CCHMC). The study was reviewed and approved by CCHMC’s Institutional Review Board. Charts were randomly selected from all hospital medicine service discharges during two 3-month periods of high patient volume: January-March 2014 and January-March 2015.
CCHMC is a large urban academic referral center that is the sole provider of general, subspecialty, and critical pediatric inpatient care for a large geographical area. CCHMC, which has 600 beds, provides care for many children who live in impoverished settings. Its hospital medicine service consists of 4 teams that care for approximately 7000 children hospitalized with general pediatric illnesses each year. Each team consists of 5 or 6 pediatric residents supervised by a hospital medicine attending.
Providers, most commonly pediatric interns, generate discharge instructions in electronic health records. In this nonautomated process, they use free-text or nonstandardized templates to create content. At discharge, instructions are printed as part of the postvisit summary, which includes updates on medications and scheduled follow-up appointments. Bedside nurses verbally review the instructions with families and provide printed copies for home use.
Data Collection and Analysis
A random sequence generator was used to select charts for review. Instructions written in a language other than English were excluded. Written discharge instructions and clinical information, including age, sex, primary diagnosis, insurance type, number of discharge medications, number of scheduled appointments at discharge, and hospital length of stay, were abstracted from electronic health records and anonymized before analysis. The primary outcomes assessed were discharge instruction readability, understandability, and completeness. Readability was calculated with Fry Readability Scale (FRS) scores,11 which range from 1 to 17 and correspond to reading levels (score 1 = 1st-grade reading level). Health literacy experts have used the FRS to assess readability in health care environments.12
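The Fry method works from two inputs measured on 100-word samples: sentences per 100 words and syllables per 100 words, which are then located on the published Fry graph to read off a grade level. A minimal sketch of computing those two coordinates is shown below; the word and syllable heuristics (vowel-group counting) are our own simplifications, not part of the Fry procedure itself.

```python
import re

def fry_coordinates(text):
    """Compute the two Fry graph inputs for a passage:
    (sentence count, syllable count) over a 100-word sample.
    Syllables are estimated by counting vowel groups, a rough heuristic."""
    words = re.findall(r"[A-Za-z']+", text)[:100]  # Fry uses 100-word samples
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return sentences, syllables

# The resulting (syllables, sentences) pair is then looked up on the
# published Fry graph to obtain the grade-level score (1-17).
```

In practice, raters average the coordinates over three 100-word samples before consulting the graph; this sketch handles a single sample.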
Understandability was measured with the Patient Education Materials Assessment Tool (PEMAT), a validated scoring system provided by the Agency for Healthcare Research and Quality.13 The PEMAT measures the understandability of print materials on a scale ranging from 0% to 100%. Higher scores indicate increased understandability, and scores under 70% indicate instructions are difficult to understand.
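The PEMAT percent score is derived from a checklist of items, each rated agree (1) or disagree (0), with some items allowed to be not applicable; not-applicable items are dropped from the denominator. A small sketch of that scoring rule, with our 70% difficulty cutoff applied:

```python
def pemat_score(ratings):
    """PEMAT percent score. Each checklist item is rated 1 (agree),
    0 (disagree), or None (not applicable); N/A items are excluded
    from the denominator."""
    applicable = [r for r in ratings if r is not None]
    if not applicable:
        raise ValueError("no applicable PEMAT items")
    return 100 * sum(applicable) / len(applicable)

def hard_to_understand(ratings, threshold=70):
    """Materials scoring under 70% are flagged as difficult to understand."""
    return pemat_score(ratings) < threshold
```

For example, ratings of agree, agree, disagree, N/A, agree yield 3 of 4 applicable items, or 75%, which falls above the difficulty cutoff.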
Although recent efforts have focused on the development of quality metrics for hospital-to-home transitions of pediatric patients,14 during our study there were no standard items to include in pediatric discharge instructions. Five criteria for completeness were determined by consensus of 3 pediatric hospital medicine faculty and were informed by qualitative results of work performed at our institution—work in which families noted challenges with information overload and a desire for pertinent and usable information that would enhance caregiver confidence and discharge preparedness.15 The criteria included statement of diagnosis, description of diagnosis, signs and symptoms indicative of the need for escalation of care (warning signs), the person caregivers should call if worried, and contact information for the primary care provider, subspecialist, and/or emergency department. Each set of discharge instructions was manually evaluated for completeness (presence of each individual component, number of components present, presence of all components). All charts were scored by the same investigator. A convenience sample of 20 charts was evaluated by a different investigator to ensure rating parameters were clear and classification was consistent (defined as perfect agreement). If the primary rater was undecided on a discharge instruction score, the secondary rater rated the instruction, and consensus was reached.
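The completeness assessment above reduces to checking each instruction against the 5 consensus criteria and recording three things: per-item presence, the count of items present, and whether all are present. A sketch of that tally, with hypothetical criterion labels standing in for the rater's judgments:

```python
# Hypothetical short labels for the 5 consensus completeness criteria.
CRITERIA = [
    "diagnosis stated",
    "diagnosis described",
    "warning signs listed",
    "whom to call if worried",
    "provider contact information",
]

def completeness(found):
    """'found' maps each criterion label to the rater's True/False judgment.
    Returns (per-item presence, number present, all 5 present)."""
    present = {c: bool(found.get(c)) for c in CRITERIA}
    n = sum(present.values())
    return present, n, n == len(CRITERIA)
```

In this study the judgments were made manually by a single rater; the structure above only formalizes how the three completeness outcomes relate.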
Means, medians, and ranges were calculated to describe the distribution of readability levels, understandability scores, and completeness of discharge instructions. Instructions were classified as readable if the FRS score was 6 or under, as understandable if the PEMAT score was 70% or higher, and as complete if all 5 criteria were satisfied. Descriptive statistics were generated for all demographic and clinical variables.
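The classification rules above can be summarized per chart and aggregated. A minimal sketch, assuming each chart is reduced to an FRS score, a PEMAT percentage, and a count of completeness criteria met (field names are ours):

```python
from statistics import median

def summarize(records):
    """records: list of dicts with 'frs' (1-17), 'pemat' (0-100),
    and 'n_complete' (0-5) for each set of discharge instructions."""
    n = len(records)
    return {
        "median_frs": median(r["frs"] for r in records),
        # readable: FRS score of 6 or under (6th-grade level or lower)
        "pct_readable": 100 * sum(r["frs"] <= 6 for r in records) / n,
        # difficult to understand: PEMAT score under 70%
        "pct_hard": 100 * sum(r["pemat"] < 70 for r in records) / n,
        # complete: all 5 consensus criteria satisfied
        "pct_complete": 100 * sum(r["n_complete"] == 5 for r in records) / n,
    }
```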
Table 1
RESULTS
Of the study period’s 3819 discharges, 200 were randomly selected for review. Table 1 lists the demographic and clinical information of patients included in the analyses. Median FRS score was 10, indicating a 10th-grade reading level (interquartile range, 8-12; range, 1-13) (Table 2). Only 14 (7%) of 200 discharge instructions had a score of 6 or under. Median PEMAT understandability score was 73% (interquartile range, 64%-82%), and 36% of instructions had a PEMAT score under 70%. No instruction satisfied all 5 of the defined characteristics of complete discharge instructions (Table 2).
Table 2
DISCUSSION
To our knowledge, this is the first study of the readability, understandability, and completeness of discharge instructions in a pediatric population. We found that the majority of discharge instruction readability levels were 10th grade or higher, that many instructions were difficult to understand, and that important information was missing from many instructions.
Discharge instruction readability levels were higher than the literacy level of many families in surrounding communities. The high school dropout rates in Cincinnati are staggering; they range from 22% to 64% in the 10 neighborhoods with the largest proportion of residents not completing high school.16 However, such findings are not unique to Cincinnati; low literacy is prevalent throughout the United States. Caregivers with limited literacy skills may struggle to navigate complex health systems, understand medical instructions and anticipatory guidance, perform child care and self-care tasks, and understand issues related to consent, medical authorization, and risk communication.17
Although readability is important, other factors also correlate with comprehension and execution of discharge tasks.18 Information must be understandable, or presented in a way that makes sense and can inform appropriate action. In many cases in our study, instructions were incomplete, despite previous investigators’ emphasizing caregivers’ desire and need for written instructions that are complete, informative, and inclusive of clearly outlined contingency plans.15,19 In addition, families may differ in the level of support needed after discharge; standardizing elements and including families in the development of discharge instructions may improve communication.8
This study had several limitations. First, the discharge instructions randomly selected for review were all written during the winter months. As the census on the hospital medicine teams is particularly high during that time, authors with competing responsibilities may not have had enough time to write effective discharge instructions then. We selected the winter period in order to capture real-world instructions written during a busy clinical time, when providers care for a high volume of patients. Second, caregiver health literacy and English-language proficiency were not assessed, and information regarding caregivers’ race/ethnicity, educational attainment, and socioeconomic status was unavailable. Third, interrater agreement was not formally evaluated. Fourth, this was a single-center study with results that may not be generalizable.
In conclusion, discharge instructions for pediatric patients are often difficult to read, difficult to understand, and incomplete. Efforts to address these communication gaps—including educational initiatives for physician trainees focused on health literacy, and quality improvement work directed at standardization and creation of readable, understandable, and complete discharge instructions—are crucial in providing safe, high-value care. Researchers need to evaluate the relationship between discharge instruction quality and outcomes, including unplanned office visits, emergency department visits, and readmissions.
Disclosure
Nothing to report.
References
1. Kutner MA, Greenberg E, Jin Y, Paulsen C. The Health Literacy of America’s Adults: Results From the 2003 National Assessment of Adult Literacy. Washington, DC: US Dept of Education, National Center for Education Statistics; 2006. NCES publication 2006-483. https://nces.ed.gov/pubs2006/2006483.pdf. Published September 2006. Accessed December 21, 2016.
2. Ratzan SC, Parker RM. Introduction. In: Selden CR, Zorn M, Ratzan S, Parker RM, eds. National Library of Medicine Current Bibliographies in Medicine: Health Literacy. Bethesda, MD: US Dept of Health and Human Services, National Institutes of Health; 2000:v-vi. NLM publication CBM 2000-1. https://www.nlm.nih.gov/archive//20061214/pubs/cbm/hliteracy.pdf. Published February 2000. Accessed December 21, 2016.
3. Arora VM, Schaninger C, D’Arcy M, et al. Improving inpatients’ identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613-619.
4. Berkman ND, Sheridan SL, Donahue KE, et al. Health literacy interventions and outcomes: an updated systematic review. Evid Rep Technol Assess (Full Rep). 2011;(199):1-941.
5. Ashbrook L, Mourad M, Sehgal N. Communicating discharge instructions to patients: a survey of nurse, intern, and hospitalist practices. J Hosp Med. 2013;8(1):36-41.
6. Kripalani S, Jacobson TA, Mugalla IC, Cawthon CR, Niesner KJ, Vaccarino V. Health literacy and the quality of physician–patient communication during hospitalization. J Hosp Med. 2010;5(5):269-275.
7. Nielsen-Bohlman L, Panzer AM, Kindig DA, eds; Committee on Health Literacy, Board on Neuroscience and Behavioral Health, Institute of Medicine. Health Literacy: A Prescription to End Confusion. Washington, DC: National Academies Press; 2004.
8. Hahn-Goldberg S, Okrainec K, Huynh T, Zahr N, Abrams H. Co-creating patient-oriented discharge instructions with patients, caregivers, and healthcare providers. J Hosp Med. 2015;10(12):804-807.
9. Lauster CD, Gibson JM, DiNella JV, DiNardo M, Korytkowski MT, Donihi AC. Implementation of standardized instructions for insulin at hospital discharge. J Hosp Med. 2009;4(8):E41-E42.
10. Howard-Anderson J, Busuttil A, Lonowski S, Vangala S, Afsar-Manesh N. From discharge to readmission: understanding the process from the patient perspective. J Hosp Med. 2016;11(6):407-412.
11. Fry E. A readability formula that saves time. J Reading. 1968;11:513-516, 575-578.
12. D’Alessandro DM, Kingsley P, Johnson-West J. The readability of pediatric patient education materials on the World Wide Web. Arch Pediatr Adolesc Med. 2001;155(7):807-812.
13. Shoemaker SJ, Wolf MS, Brach C. The Patient Education Materials Assessment Tool (PEMAT) and User’s Guide: An Instrument to Assess the Understandability and Actionability of Print and Audiovisual Patient Education Materials. Rockville, MD: US Dept of Health and Human Services, Agency for Healthcare Research and Quality; 2013. http://www.ahrq.gov/professionals/prevention-chronic-care/improve/self-mgmt/pemat/index.html. Published October 2013. Accessed November 27, 2013.
14. Leyenaar JK, Desai AD, Burkhart Q, et al. Quality measures to assess care transitions for hospitalized children. Pediatrics. 2016;138(2).
15. Solan LG, Beck AF, Brunswick SA, et al; H2O Study Group. The family perspective on hospital to home transitions: a qualitative study. Pediatrics. 2015;136(6):e1539-e1549.
16. Maloney M, Auffrey C. The Social Areas of Cincinnati: An Analysis of Social Needs: Patterns for Five Census Decades. 5th ed. Cincinnati, OH: University of Cincinnati School of Planning/United Way/University of Cincinnati Community Research Collaborative; 2013. http://www.socialareasofcincinnati.org/files/FifthEdition/SASBook.pdf. Published April 2013. Accessed December 21, 2016.
17. Rothman RL, Yin HS, Mulvaney S, Co JP, Homer C, Lannon C. Health literacy and quality: focus on chronic illness care and patient safety. Pediatrics. 2009;124(suppl 3):S315-S326.
18. Moon RY, Cheng TL, Patel KM, Baumhaft K, Scheidt PC. Parental literacy level and understanding of medical information. Pediatrics. 1998;102(2):e25.
19. Desai AD, Durkin LK, Jacob-Files EA, Mangione-Smith R. Caregiver perceptions of hospital to home transitions according to medical complexity: a qualitative study. Acad Pediatr. 2016;16(2):136-144.
In conclusion, discharge instructions for pediatric patients are often difficult to read and understand, and incomplete. Efforts to address these communication gaps—including educational initiatives for physician trainees focused on health literacy, and quality improvement work directed at standardization and creation of readable, understandable, and complete discharge instructions—are crucial in providing safe, high-value care. Researchers need to evaluate the relationship between discharge instruction quality and outcomes, including unplanned office visits, emergency department visits, and readmissions.
Disclosure
Nothing to report.
References
1. Kutner MA, Greenberg E, Jin Y, Paulsen C. The Health Literacy of America’s Adults: Results From the 2003 National Assessment of Adult Literacy. Washington, DC: US Dept of Education, National Center for Education Statistics; 2006. NCES publication 2006-483. https://nces.ed.gov/pubs2006/2006483.pdf. Published September 2006. Accessed December 21, 2016.
2. Ratzan SC, Parker RM. Introduction. In: Selden CR, Zorn M, Ratzan S, Parker RM, eds. National Library of Medicine Current Bibliographies in Medicine: Health Literacy. Bethesda, MD: US Dept of Health and Human Services, National Institutes of Health; 2000:v-vi. NLM publication CBM 2000-1. https://www.nlm.nih.gov/archive//20061214/pubs/cbm/hliteracy.pdf. Published February 2000. Accessed December 21, 2016.
3. Arora VM, Schaninger C, D’Arcy M, et al. Improving inpatients’ identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613-619. PubMed
4. Berkman ND, Sheridan SL, Donahue KE, et al. Health literacy interventions and outcomes: an updated systematic review. Evid Rep Technol Assess (Full Rep). 2011;(199):1-941. PubMed
5. Ashbrook L, Mourad M, Sehgal N. Communicating discharge instructions to patients: a survey of nurse, intern, and hospitalist practices. J Hosp Med. 2013;8(1):36-41. PubMed
6. Kripalani S, Jacobson TA, Mugalla IC, Cawthon CR, Niesner KJ, Vaccarino V. Health literacy and the quality of physician–patient communication during hospitalization. J Hosp Med. 2010;5(5):269-275. PubMed
7. Nielsen-Bohlman L, Panzer AM, Kindig DA, eds; Committee on Health Literacy, Board on Neuroscience and Behavioral Health, Institute of Medicine. Health Literacy: A Prescription to End Confusion. Washington, DC: National Academies Press; 2004.
8. Hahn-Goldberg S, Okrainec K, Huynh T, Zahr N, Abrams H. Co-creating patient-oriented discharge instructions with patients, caregivers, and healthcare providers. J Hosp Med. 2015;10(12):804-807. PubMed
9. Lauster CD, Gibson JM, DiNella JV, DiNardo M, Korytkowski MT, Donihi AC. Implementation of standardized instructions for insulin at hospital discharge. J Hosp Med. 2009;4(8):E41-E42. PubMed
10. Howard-Anderson J, Busuttil A, Lonowski S, Vangala S, Afsar-Manesh N. From discharge to readmission: understanding the process from the patient perspective. J Hosp Med. 2016;11(6):407-412. PubMed
11. Fry E. A readability formula that saves time. J Reading. 1968;11:513-516, 575-578.
12. D’Alessandro DM, Kingsley P, Johnson-West J. The readability of pediatric patient education materials on the World Wide Web. Arch Pediatr Adolesc Med. 2001;155(7):807-812. PubMed
13. Shoemaker SJ, Wolf MS, Brach C. The Patient Education Materials Assessment Tool (PEMAT) and User’s Guide: An Instrument to Assess the Understandability and Actionability of Print and Audiovisual Patient Education Materials. Rockville, MD: US Dept of Health and Human Services, Agency for Healthcare Research and Quality; 2013. http://www.ahrq.gov/professionals/prevention-chronic-care/improve/self-mgmt/pemat/index.html. Published October 2013. Accessed November 27, 2013.
14. Leyenaar JK, Desai AD, Burkhart Q, et al. Quality measures to assess care transitions for hospitalized children. Pediatrics. 2016;138(2). PubMed
15. Solan LG, Beck AF, Brunswick SA, et al; H2O Study Group. The family perspective on hospital to home transitions: a qualitative study. Pediatrics. 2015;136(6):e1539-e1549. PubMed
16. Maloney M, Auffrey C. The Social Areas of Cincinnati: An Analysis of Social Needs: Patterns for Five Census Decades. 5th ed. Cincinnati, OH: University of Cincinnati School of Planning/United Way/University of Cincinnati Community Research Collaborative; 2013. http://www.socialareasofcincinnati.org/files/FifthEdition/SASBook.pdf. Published April 2013. Accessed December 21, 2016.
17. Rothman RL, Yin HS, Mulvaney S, Co JP, Homer C, Lannon C. Health literacy and quality: focus on chronic illness care and patient safety. Pediatrics. 2009;124(suppl 3):S315-S326. PubMed
18. Moon RY, Cheng TL, Patel KM, Baumhaft K, Scheidt PC. Parental literacy level and understanding of medical information. Pediatrics. 1998;102(2):e25. PubMed
19. Desai AD, Durkin LK, Jacob-Files EA, Mangione-Smith R. Caregiver perceptions of hospital to home transitions according to medical complexity: a qualitative study. Acad Pediatr. 2016;16(2):136-144. PubMed
Address for correspondence and reprint requests: Ndidi I. Unaka, MD, MEd, Division of Hospital Medicine, Cincinnati Children’s Hospital Medical Center, 3333 Burnet Ave, ML 5018, Cincinnati, OH 45229; Telephone: 513-636-8354; Fax: 513-636-7905; E-mail: [email protected]
Frequent and prolonged fasting can lead to patient dissatisfaction and distress.1 It may also cause malnutrition and negatively affect outcomes in high-risk populations such as the elderly.2 Evidence suggests that patients are commonly kept fasting longer than necessary.3,4 However, the extent to which nil per os (NPO) orders are necessary or adhere to evidence-based duration is unknown.
Our previous study showed that half of the patients admitted to the general medicine services experienced a period of fasting and that 1 in 4 NPO orders may be avoidable.5 In the present study, we aimed to provide action-oriented recommendations by 1) assessing why some interventions did not occur after NPO orders were placed and 2) analyzing NPO orders by indication and comparing them with the best available evidence.
METHODS
This retrospective study was conducted at an academic medical center in the United States. The study protocol was approved by the Mayo Clinic Institutional Review Board.
Detailed data handling and NPO order review processes have been described elsewhere.5 Briefly, we identified 1200 NPO orders of 120 or more minutes’ duration that were written for patients on the general medicine services at our institution in 2013. After blinded duplicate review, we excluded 70 orders written in the intensive care unit or on other services, 24 with unknown indications, 101 primarily indicated for clinical reasons, and 81 that had multiple indications. Consequently, 924 orders indicated for a single intervention (eg, imaging study, procedure, or operation) were included in the main analysis.
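The exclusion cascade described above can be checked arithmetically. The sketch below is illustrative only; the counts come from the text, and the labels paraphrase the stated exclusion reasons.

```python
# Reconstruction of the exclusion cascade from the counts reported above.
identified = 1200  # NPO orders of 120 or more minutes' duration in 2013
excluded = {
    "ICU or other services": 70,
    "unknown indication": 24,
    "primarily clinical indication": 101,
    "multiple indications": 81,
}

# Orders indicated for a single intervention, included in the main analysis
analyzed = identified - sum(excluded.values())
```

The arithmetic reproduces the 924 orders included in the main analysis.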
We assessed whether the indicated intervention was performed. If performed, we recorded the time the intervention was started; if not, we assessed the reasons why. We also performed exploratory analyses to investigate factors associated with performance of the indicated intervention. The variables were 1) NPO starting at midnight, 2) NPO starting within 12 hours of admission, and 3) indication (eg, imaging study, procedure, or operation). We also conducted sensitivity analyses limited to 1 NPO order per patient (N = 673) to account for potential nonindependence of the orders.
We then categorized the indications for the orders in greater detail and identified those with a sample size >10, yielding 779 orders for the analysis by indication. We reviewed the literature by indication to determine the suggested minimally required fasting durations and compared the fasting durations in our patients with current evidence-based recommendations.
For descriptive statistics, we used median with interquartile range (IQR) for continuous variables and percentage for discrete variables; chi-square tests were used for comparison of discrete variables. All P values were two-tailed and P < 0.05 was considered significant.
RESULTS
Median length of the 924 orders was 12.7 hours (IQR, 10.1-15.7 hours); 190 (20.6%), 577 (62.4%), and 157 (17.0%) orders were indicated for imaging studies, procedures, and operations, respectively. NPO started at midnight in 662 (71.6%) orders and within 12 hours of admission in 210 (22.7%).
The indicated interventions were not performed in 183 (19.8%) orders, mostly as a result of a change in plan (75/183, 41.0%) or scheduling barriers (43/183, 23.5%). Plan changes occurred when, for example, input from a consulting service was obtained or the supervising physician decided not to pursue the intervention. Scheduling barriers included unavailable slots and conflicts with other tasks or tests. Notably, in only 1 of 183 (0.5%) orders was the intervention cancelled because the patient ate (Table 1).
Table 1
NPO orders starting at midnight were associated with higher likelihood of indicated interventions being performed (546/662, 82.5% vs. 195/262, 74.4%; P = 0.006), as were NPO orders starting more than 12 hours after admission (601/714, 84.2% vs. 140/210, 66.7%; P < 0.001). Imaging studies were more likely to be performed than procedures or operations (170/190, 89.5% vs. 452/577, 78.3% vs. 119/157, 75.8%; P = 0.001). These results were unchanged when the analyses were limited to 1 order per patient.
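The first comparison above can be reproduced from the reported counts with a Pearson chi-square test on a 2 x 2 table (no continuity correction). This is a sketch using only the Python standard library, not the authors' analysis code; for df = 1, the upper-tail probability can be computed directly with the complementary error function.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic and two-tailed p value (df = 1,
    no continuity correction) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = sum(
        (obs - r * col / n) ** 2 / (r * col / n)
        for obs, r, col in (
            (a, rows[0], cols[0]), (b, rows[0], cols[1]),
            (c, rows[1], cols[0]), (d, rows[1], cols[1]),
        )
    )
    # For df = 1, the upper-tail probability is erfc(sqrt(stat / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Midnight-start orders: 546 of 662 performed; other starts: 195 of 262
stat, p = chi_square_2x2(546, 662 - 546, 195, 262 - 195)
```

With these counts, the p value rounds to the reported P = 0.006.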
When analyzed by indication, the median durations of NPO orders ranged from 8.3 hours for kidney ultrasound to 13.9 hours for upper endoscopy. These durations shortened slightly, most by 1 to 2 hours, when calculated from the start of the order to the initiation of the intervention. For most indications, the literature review identified a minimally required NPO length of 2 to 4 hours, generally 6 to 8 hours shorter than the median NPO length in this study sample. Furthermore, for indications such as computed tomography with intravenous contrast and abdominal ultrasound, the literature suggested NPO may be unnecessary (Table 2).6-9,16-30
Table 2
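The scale of the excess fasting can be illustrated with the two endpoint indications reported in the text. This sketch is only illustrative: it uses the two reported median durations and the upper end of the 2-to-4-hour range cited for most indications, so the per-indication minimums in Table 2 may differ.

```python
# Median NPO duration (hours) for the two endpoint indications in the text
median_npo_hours = {"kidney ultrasound": 8.3, "upper endoscopy": 13.9}

# Range of minimally required fasting suggested by the literature review
suggested_min, suggested_max = 2.0, 4.0

# Excess fasting beyond even the upper end of the suggested range
excess_hours = {
    indication: round(median - suggested_max, 1)
    for indication, median in median_npo_hours.items()
}
```

Even against the most permissive 4-hour minimum, the reported medians imply roughly 4 to 10 hours of fasting beyond what the literature suggests is required.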
DISCUSSION
We analyzed a comprehensive set of NPO orders written for interventions in medical inpatients at an academic medical center. NPO started at midnight in 71.6% of the analyzed orders. In 1 in 5 NPO orders, the indicated intervention was not performed, largely because of a change in plan or scheduling barriers. In most NPO orders in which the indicated intervention was performed, patients were kept fasting either unnecessarily or much longer than needed. To our knowledge, this study is the first to evaluate NPO-ordering practices across multiple indications and compare them with the best available evidence.
These results suggest that current NPO practice in the hospital is suboptimal, and the literature measuring the magnitude of this issue is limited.6,7 An important aspect of our findings is that, in a substantial number of NPO orders, the indicated interventions were not performed for seemingly avoidable reasons. These issues may be attributable to clinicians’ preemptive decisions, lack of knowledge, or inefficiency in the healthcare system. Minimizing anticipatory NPO may carry drawbacks, such as delays in interventions, and only limited evidence links excessive NPO with clinical outcomes (eg, length of stay, readmission, or death). From the patients’ perspective, however, fasting is justified only when it provides clinical benefit. Hence, these findings call for substantial improvement of NPO practices.
Furthermore, the results indicated that the duration of most NPO orders was longer than the minimal duration currently suggested in the literature. Whereas strong evidence suggests that no more than 2 hours of fasting is generally required for preoperative purposes,8 few studies have evaluated the required length of NPO for imaging studies and procedures,9-11 which comprised most of the orders in the study cohort. For example, in upper endoscopy, 2 small studies suggested that fasting for 1 or 2 hours may provide visualization as good as that achieved with the conventional 6 to 8 hours of fasting.9,10 In coronary angiography, a retrospective study demonstrated that fasting may be unnecessary.11 Because of the lack of robust evidence, guidelines for these interventions either do not specify the required length of fasting or have not changed the conventional recommendations, leading to large variations in fasting policies by institution.6,12 Therefore, more studies are needed to define the required length of fasting for these indications and to measure the exact magnitude of excessive fasting in the hospital.
One limitation of this study is generalizability, as NPO practice may vary considerably by institution, as suggested in the literature.4,6,12 Nevertheless, studies have suggested that excessive fasting exists at other institutions.3,4,13 Thus, this study adds further evidence of the prevalence of suboptimal NPO practice and provides a benchmark that other institutions can use when evaluating their own NPO practice. Another limitation is the assumption that the evidence for minimally required NPO duration can be applied to our patient sample. Specifically, the American Society of Anesthesiologists guideline states that preoperative or preprocedural fasting may need to be longer than 2 hours for 1) patients with comorbidities that can affect gastric emptying or fluid volume, such as obesity, diabetes, emergency care, and enteral tube feeding, and 2) patients in whom airway management might be difficult.8 We did not consider these possibilities, and because these conditions are prevalent in medical inpatients, we may be overstating the excessiveness of fasting orders. On the other hand, prolonged fasting may cause harm, especially in patients with diabetes, by inducing hypoglycemia.14 Further, no study has rigorously evaluated the safety of shortening the fasting period for these subsets of patients. Therefore, it is necessary to establish the optimal duration of NPO and to improve NPO-ordering practice even in these patient subsets.
While more research is needed to define the optimal duration of NPO for various interventions and specific subsets of patients, and to establish the linkage of excessive NPO with clinical outcomes, our data provide insights into immediate actions clinicians and institutions can take to improve NPO practice. First, institutions can establish more robust practice guidelines or institutional protocols for NPO orders. Successful interventions have been reported,15 and breaking the habit of ordering NPO after midnight is certainly possible. We recommend that each institution do so by indication, potentially through interdepartmental work groups involving appropriate departments such as radiology, surgery, and medicine. Second, institutional guidelines or protocols can be incorporated into the ordering system to enable appropriate NPO ordering. For example, at our institution, we are modifying the order screens for ultrasound-guided paracentesis and thoracentesis to indicate that NPO is not necessary for these procedures unless sedation is anticipated. We conclude that, at any institution, efforts to improve NPO practice are urgently warranted to minimize unnecessary fasting.
Disclosures
This publication was supported by Grant Number UL1 TR000135 from the National Center for Advancing Translational Sciences (NCATS). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health. The authors report no financial conflicts of interest.
References
1. Carey SK, Conchin S, Bloomfield-Stone S. A qualitative study into the impact of fasting within a large tertiary hospital in Australia - the patients’ perspective. J Clin Nurs. 2015;24:1946-1954. PubMed
2. Kyriakos G, Calleja-Fernández A, Ávila-Turcios D, Cano-Rodríguez I, Ballesteros Pomar MD, Vidal-Casariego A. Prolonged fasting with fluid therapy is related to poorer outcomes in medical patients. Nutr Hosp. 2013;28:1710-1716. PubMed
3. Rycroft-Malone J, Seers K, Crichton N, et al. A pragmatic cluster randomised trial evaluating three implementation interventions. Implement Sci. 2012;7:80. PubMed
4. Breuer JP, Bosse G, Seifert S, et al. Pre-operative fasting: a nationwide survey of German anaesthesia departments. Acta Anaesthesiol Scand. 2010;54:313-320. PubMed
5. Sorita A, Thongprayoon C, Ahmed A, et al. Frequency and appropriateness of fasting orders in the hospital. Mayo Clin Proc. 2015;90:1225-1232. PubMed
6. Lee BY, Ok JJ, Abdelaziz Elsayed AA, Kim Y, Han DH. Preparative fasting for contrast-enhanced CT: reconsideration. Radiology. 2012;263:444-450. PubMed
7. Manchikanti L, Malla Y, Wargo BW, Fellows B. Preoperative fasting before interventional techniques: is it necessary or evidence-based? Pain Physician. 2011;14:459-467. PubMed
8. American Society of Anesthesiologists Committee. Practice guidelines for preoperative fasting and the use of pharmacologic agents to reduce the risk of pulmonary aspiration: application to healthy patients undergoing elective procedures: an updated report by the American Society of Anesthesiologists Committee on Standards and Practice Parameters. Anesthesiology. 2011;114:495-511. PubMed
9. Koeppe AT, Lubini M, Bonadeo NM, Moraes I Jr, Fornari F. Comfort, safety and quality of upper gastrointestinal endoscopy after 2 hours fasting: a randomized controlled trial. BMC Gastroenterol. 2013;13:158. PubMed
10. De Silva AP, Amarasiri L, Liyanage MN, Kottachchi D, Dassanayake AS, de Silva HJ. One-hour fast for water and six-hour fast for solids prior to endoscopy provides good endoscopic vision and results in minimum patient discomfort. J Gastroenterol Hepatol. 2009;24:1095-1097. PubMed
11. Hamid T, Aleem Q, Lau Y, et al. Pre-procedural fasting for coronary interventions: is it time to change practice? Heart. 2014;100:658-661. PubMed
12. Ahmed SU, Tonidandel W, Trella J, Martin NM, Chang Y. Peri-procedural protocols for interventional pain management techniques: a survey of US pain centers. Pain Physician. 2005;8:181-185. PubMed
13. Franklin GA, McClave SA, Hurt RT, et al. Physician-delivered malnutrition: why do patients receive nothing by mouth or a clear liquid diet in a university hospital setting? JPEN J Parenter Enteral Nutr. 2011;35:337-342. PubMed
14. Aldasouqi S, Sheikh A, Klosterman P, et al. Hypoglycemia in patients with diabetes who are fasting for laboratory blood tests: the Cape Girardeau Hypoglycemia En Route Prevention Program. Postgrad Med. 2013;125:136-143. PubMed
15. Aguilar-Nascimento JE, Salomão AB, Caporossi C, Diniz BN. Clinical benefits after the implementation of a multimodal perioperative protocol in elderly patients. Arq Gastroenterol. 2010;47:178-183. PubMed
16. Hilberath JN, Oakes DA, Shernan SK, Bulwer BE, D’Ambra MN, Eltzschig HK. Safety of transesophageal echocardiography. J Am Soc Echocardiogr. 2010;23:1115-1127. PubMed
17. Hahn RT, Abraham T, Adams MS, et al. Guidelines for performing a comprehensive transesophageal echocardiographic examination: recommendations from the American Society of Echocardiography and the Society of Cardiovascular Anesthesiologists. J Am Soc Echocardiogr. 2013;26:921-964. PubMed
18. Sinan T, Leven H, Sheikh M. Is fasting a necessary preparation for abdominal ultrasound? BMC Med Imaging. 2003;3:1. PubMed
19. Garcia DA, Froes TR. Importance of fasting in preparing dogs for abdominal ultrasound examination of specific organs. J Small Anim Pract. 2014;55:630-634. PubMed
20. Kidney ultrasound. The Johns Hopkins University, The Johns Hopkins Hospital, and Johns Hopkins Health System. Health Library, Johns Hopkins Medicine. Available at: http://www.hopkinsmedicine.org/healthlibrary/test_procedures/urology/kidney_ultrasound_92,P07709/. Accessed August 17, 2015.
21. Surasi DS, Bhambhvani P, Baldwin JA, Almodovar SE, O’Malley JP. 18F-FDG PET and PET/CT patient preparation: a review of the literature. J Nucl Med Technol. 2014;42:5-13. PubMed
22. Kang SH, Hyun JJ. Preparation and patient evaluation for safe gastrointestinal endoscopy. Clin Endosc. 2013;46:212-218. PubMed
23. Smith I, Kranke P, Murat I, et al. Perioperative fasting in adults and children: guidelines from the European Society of Anaesthesiology. Eur J Anaesthesiol. 2011;28:556-569. PubMed
24. ASGE Standards of Practice Committee, Saltzman JR, Cash BD, Pasha SF, et al. Bowel preparation before colonoscopy. Gastrointest Endosc. 2015;81:781-794. PubMed
25. Hassan C, Bretthauer M, Kaminski MF, et al; European Society of Gastrointestinal Endoscopy. Bowel preparation for colonoscopy: European Society of Gastrointestinal Endoscopy (ESGE) guideline. Endoscopy. 2013;45:142-150. PubMed
26. Du Rand IA, Blaikley J, Booton R, et al; British Thoracic Society Bronchoscopy Guideline Group. British Thoracic Society guideline for diagnostic flexible bronchoscopy in adults: accredited by NICE. Thorax. 2013;68(suppl 1):i1-i44. PubMed
27. Thoracentesis. The Johns Hopkins University, The Johns Hopkins Hospital, and Johns Hopkins Health System. Health Library, Johns Hopkins Medicine. Available at: http://www.hopkinsmedicine.org/healthlibrary/test_procedures/pulmonary/thoracentesis_92,P07761/. Accessed August 18, 2015.
28. Runyon BA. Diagnostic and therapeutic abdominal paracentesis. UpToDate. Available at: http://www.uptodate.com/contents/diagnostic-and-therapeutic-abdominal-paracentesis. Published February 18, 2014. Accessed August 18, 2015.
29. Granata A, Fiorini F, Andrulli S, et al. Doppler ultrasound and renal artery stenosis: An overview. J Ultrasound. 2009;12:133-143. PubMed
30. Gerhard-Herman M, Gardin JM, Jaff M, et al. Guidelines for noninvasive vascular laboratory testing: a report from the American Society of Echocardiography and the Society for Vascular Medicine and Biology. Vasc Med. 2006;11:183-200. PubMed
Furthermore, results indicated that the duration of most NPO orders was longer than the minimal duration currently suggested in the literature. Whereas strong evidence suggests that no longer than 2 hours of fasting is generally required for preoperative purposes,8 limited studies have evaluated the required length of NPO orders in imaging studies and procedures,9-11 which comprised most of the orders in the study cohort. For example, in upper endoscopy, 2 small studies suggested fasting for 1 or 2 hours may provide as good visualization as with the conventional 6 to 8 hours of fasting.9,10 In coronary angiography, a retrospective study demonstrated fasting may be unnecessary.11 Due to lack of robust evidence, guidelines for these interventions either do not specify the required length of fasting or have not changed the conventional recommendations for fasting, leading to large variations in fasting policies by institution.6,12 Therefore, more studies are needed to define required length of fasting for those indications and to measure the exact magnitude of excessive fasting in the hospital.
One of the limitations of this study is generalizability because NPO practice may considerably vary by institution as suggested in the literature.4,6,12 Conversely, studies have suggested that excessive fasting exists in other institutions.3,4,13 Thus, this study adds further evidence of the prevalence of suboptimal NPO practice to the literature and provides a benchmark that other institutions can refer to when evaluating their own NPO practice. Another limitation is the assumption that the evidence for minimally required NPO duration can be applied to our patient samples. Specifically, the American Society of Anesthesiologists guideline states that preoperative or preprocedural fasting may need to be longer than 2 hours for 1) patients with comorbidities that can affect gastric emptying or fluid volume such as obesity, diabetes, emergency care, and enteral tube feeding, and 2) patients in whom airway management might be difficult.8 We did not consider these possibilities, and as these conditions are prevalent in medical inpatients, we may be overstating the excessiveness of fasting orders. On the other hand, especially in patients with diabetes, prolonged fasting may cause harm by inducing hypoglycemia.14 Further, no study rigorously evaluated safety of shortening the fasting period for these subsets of patients. Therefore, it is necessary to establish optimal duration of NPO and to improve NPO ordering practice even in these patient subsets.
While more research is needed to define optimal duration of NPO for various interventions and specific subsets of patients and to establish linkage of excessive NPO with clinical outcomes, our data provide insights into immediate actions that can be taken by clinicians to improve NPO practices using our data as a benchmark. First, institutions can establish more robust practice guidelines or institutional protocols for NPO orders. Successful interventions have been reported,15 and breaking the habit of ordering NPO after midnight is certainly possible. We recommend each institution does so by indication, potentially through interdepartmental work groups involving appropriate departments such as radiology, surgery, and medicine. Second, institutional guidelines or protocols can be incorporated in the ordering system to enable appropriate NPO ordering. For example, at our institution, we are modifying the order screens for ultrasound-guided paracentesis and thoracentesis to indicate that NPO is not necessary for these procedures unless sedation is anticipated. We conclude that, at any institution, efforts in improving the NPO practice are urgently warranted to minimize unnecessary fasting.
Disclosures
This publication was supported by Grant Number UL1 TR000135 from the National Center for Advancing Translational Sciences (NCATS). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health. The authors report no financial conflicts of interest.
Frequent and prolonged fasting can lead to patient dissatisfaction and distress.1 It may also cause malnutrition and negatively affect outcomes in high-risk populations such as the elderly.2 Evidence suggests that patients are commonly kept fasting longer than necessary.3,4 However, the extent to which nil per os (NPO) orders are necessary or adhere to evidence-based duration is unknown.
Our study showed half of patients admitted to the general medicine services experienced a period of fasting, and 1 in 4 NPO orders may be avoidable.5 In this study, we aimed to provide action-oriented recommendations by 1) assessing why some interventions did not occur after NPO orders were placed and 2) analyzing NPO orders by indication and comparing them with the best available evidence.
METHODS
This retrospective study was conducted at an academic medical center in the United States. The study protocol was approved by the Mayo Clinic Institutional Review Board.
Detailed data handling and NPO order review processes have been described elsewhere.5 Briefly, we identified 1200 NPO orders of 120 or more minutes’ duration that were written for patients on the general medicine services at our institution in 2013. After blinded duplicate review, we excluded 70 orders written in the intensive care unit or on other services, 24 with unknown indications, 101 primarily indicated for clinical reasons, and 81 that had multiple indications. Consequently, 924 orders indicated for a single intervention (eg, imaging study, procedure, or operation) were included in the main analysis.
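The order-selection cascade described above can be tallied with a short script; the counts come directly from the text, and the variable names are our own:

```python
# Sanity-check the exclusion cascade described in the text.
# All counts are taken from the article; names are illustrative.
identified = 1200  # NPO orders of 120 or more minutes' duration
exclusions = {
    "ICU or other services": 70,
    "unknown indication": 24,
    "primarily clinical indication": 101,
    "multiple indications": 81,
}
analytic_sample = identified - sum(exclusions.values())
print(analytic_sample)  # 924 orders in the main analysis
```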
We assessed whether the indicated intervention was performed. If it was, we recorded the time at which the intervention started; if not, we assessed the reasons why. We also performed exploratory analyses of factors associated with performance of the indicated intervention. The variables were 1) NPO starting at midnight, 2) NPO starting within 12 hours of admission, and 3) indication (eg, imaging study, procedure, or operation). We also conducted sensitivity analyses limited to 1 NPO order per patient (N = 673) to assess the independence of the orders.
We then categorized the indications for the orders in greater detail and identified those with a sample size >10, yielding 779 orders for the analysis by indication. For each indication, we reviewed the literature to determine the suggested minimally required fasting duration and compared it with the fasting durations observed in our patients.
For descriptive statistics, we used median with interquartile range (IQR) for continuous variables and percentage for discrete variables; chi-square tests were used for comparison of discrete variables. All P values were two-tailed and P < 0.05 was considered significant.
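As a minimal sketch of the descriptive summary described above, applied to hypothetical fasting durations rather than the study data (Python is used here in place of whichever statistical package the authors employed):

```python
from statistics import median, quantiles

# Hypothetical NPO durations in hours -- illustrative only, not study data.
durations = [8.5, 10.1, 11.2, 12.7, 13.0, 14.4, 15.7, 16.9]

# quantiles(n=4) returns the three quartile cut points; the first and
# last bound the interquartile range (IQR).
q1, _, q3 = quantiles(durations, n=4)
print(f"median {median(durations):.2f} h (IQR, {q1:.2f}-{q3:.2f} h)")
```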
RESULTS
The median duration of the 924 orders was 12.7 hours (IQR, 10.1-15.7 hours); 190 (20.6%), 577 (62.4%), and 157 (17.0%) orders were indicated for imaging studies, procedures, and operations, respectively. NPO started at midnight in 662 orders (71.6%) and within 12 hours of admission in 210 orders (22.7%).
The indicated interventions were not performed in 183 orders (19.8%), mostly as a result of a change in plan (75/183, 41.0%) or scheduling barriers (43/183, 23.5%). Plan changes occurred when, for example, input from a consulting service was obtained or the supervising physician decided not to pursue the intervention. Scheduling barriers included unavailable slots and conflicts with other tasks or tests. Notably, in only 1 of the 183 orders (0.5%) was the intervention cancelled because the patient had eaten (Table 1).
Table 1
NPO orders starting at midnight were associated with higher likelihood of indicated interventions being performed (546/662, 82.5% vs. 195/262, 74.4%; P = 0.006), as were NPO orders starting more than 12 hours after admission (601/714, 84.2% vs. 140/210, 66.7%; P < 0.001). Imaging studies were more likely to be performed than procedures or operations (170/190, 89.5% vs. 452/577, 78.3% vs. 119/157, 75.8%; P = 0.001). These results were unchanged when the analyses were limited to 1 order per patient.
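The midnight-start comparison can be reproduced from the counts reported above. A sketch using a hand-rolled Pearson chi-square helper (our own function, not the authors' code):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Midnight-start orders: 546 of 662 performed; other starts: 195 of 262.
stat = chi_square_2x2(546, 662 - 546, 195, 262 - 195)
print(round(stat, 2))  # ~7.66; with df = 1 this corresponds to P ~ 0.006
```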
When analyzed by indication, the median duration of NPO orders ranged from 8.3 hours for kidney ultrasound to 13.9 hours for upper endoscopy. These durations were slightly shorter, most by 1 to 2 hours, when calculated from the start of the order to the initiation of the intervention. For most indications, the literature review identified a minimally required fasting length of 2 to 4 hours, generally 6 to 8 hours shorter than the median NPO length in this study sample. Furthermore, for indications such as computed tomography with intravenous contrast and abdominal ultrasound, the literature suggested NPO may be unnecessary (Table 2).6-9,16-30
Table 2
DISCUSSION
We analyzed a comprehensive set of NPO orders written for interventions in medical inpatients at an academic medical center. NPO started at midnight in 71.6% of the analyzed orders. In 1 of every 5 NPO orders, the indicated intervention was not performed, largely because of a change in plan or scheduling barriers. In most NPO orders in which the indicated intervention was performed, patients were kept fasting either unnecessarily or much longer than needed. This study is the first of its kind to evaluate NPO-ordering practices across multiple indications and compare them with the best available evidence.
These results suggest that current NPO practice in the hospital is suboptimal; only limited literature has measured the magnitude of this issue.6,7 An important aspect of our findings is that, in a substantial number of NPO orders, the indicated interventions were not performed for seemingly avoidable reasons. These failures may be attributable to clinicians' preemptive decisions, lack of knowledge, or inefficiency in the healthcare system. Minimizing anticipatory NPO may carry drawbacks, such as delays in interventions, and only limited evidence links excessive NPO with clinical outcomes (eg, length of stay, readmission, or death). From the patient's perspective, however, fasting is acceptable only when it confers clinical benefit. Hence, these findings call for substantial improvement of NPO practices.
Furthermore, the results indicated that the duration of most NPO orders was longer than the minimal duration currently suggested in the literature. Whereas strong evidence suggests that no more than 2 hours of fasting is generally required for preoperative purposes,8 few studies have evaluated the required length of NPO for imaging studies and procedures,9-11 which comprised most of the orders in the study cohort. For example, for upper endoscopy, 2 small studies suggested that fasting for 1 or 2 hours may provide visualization as good as the conventional 6 to 8 hours of fasting.9,10 For coronary angiography, a retrospective study demonstrated that fasting may be unnecessary.11 Because robust evidence is lacking, guidelines for these interventions either do not specify the required length of fasting or have not changed the conventional recommendations, leading to wide variation in fasting policies across institutions.6,12 Therefore, more studies are needed to define the required length of fasting for these indications and to measure the exact magnitude of excessive fasting in the hospital.
One limitation of this study is generalizability, because NPO practice may vary considerably by institution, as suggested in the literature.4,6,12 Conversely, studies have suggested that excessive fasting exists at other institutions.3,4,13 Thus, this study adds further evidence of the prevalence of suboptimal NPO practice and provides a benchmark that other institutions can use when evaluating their own NPO practice. Another limitation is the assumption that the evidence for minimally required NPO duration can be applied to our patient sample. Specifically, the American Society of Anesthesiologists guideline states that preoperative or preprocedural fasting may need to be longer than 2 hours for 1) patients with conditions that can affect gastric emptying or fluid volume, such as obesity, diabetes, emergency care, and enteral tube feeding, and 2) patients in whom airway management might be difficult.8 We did not account for these possibilities, and because such conditions are prevalent among medical inpatients, we may have overstated the excessiveness of the fasting orders. On the other hand, prolonged fasting may itself cause harm, particularly in patients with diabetes, by inducing hypoglycemia.14 Further, no study has rigorously evaluated the safety of shortening the fasting period in these subsets of patients. Therefore, it is necessary to establish the optimal duration of NPO and to improve NPO-ordering practice even in these patient subsets.
While more research is needed to define the optimal duration of NPO for various interventions and specific patient subsets, and to establish the link between excessive NPO and clinical outcomes, our data point to immediate actions clinicians can take to improve NPO practices, using our data as a benchmark. First, institutions can establish more robust practice guidelines or institutional protocols for NPO orders. Successful interventions have been reported,15 and breaking the habit of ordering NPO after midnight is certainly possible. We recommend that each institution do so by indication, potentially through interdepartmental work groups involving appropriate departments such as radiology, surgery, and medicine. Second, institutional guidelines or protocols can be incorporated into the ordering system to enable appropriate NPO ordering. For example, at our institution, we are modifying the order screens for ultrasound-guided paracentesis and thoracentesis to indicate that NPO is not necessary for these procedures unless sedation is anticipated. We conclude that, at any institution, efforts to improve NPO practice are urgently warranted to minimize unnecessary fasting.
Disclosures
This publication was supported by Grant Number UL1 TR000135 from the National Center for Advancing Translational Sciences (NCATS). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health. The authors report no financial conflicts of interest.
References
1. Carey SK, Conchin S, Bloomfield-Stone S. A qualitative study into the impact of fasting within a large tertiary hospital in Australia - the patients’ perspective. J Clin Nurs. 2015;24:1946-1954. PubMed
2. Kyriakos G, Calleja-Fernández A, Ávila-Turcios D, Cano-Rodríguez I, Ballesteros Pomar MD, Vidal-Casariego A. Prolonged fasting with fluid therapy is related to poorer outcomes in medical patients. Nutr Hosp. 2013;28:1710-1716. PubMed
3. Rycroft-Malone J, Seers K, Crichton N, et al. A pragmatic cluster randomised trial evaluating three implementation interventions. Implement Sci. 2012;7:80. PubMed
4. Breuer JP, Bosse G, Seifert S, et al. Pre-operative fasting: a nationwide survey of German anaesthesia departments. Acta Anaesthesiol Scand. 2010;54:313-320. PubMed
5. Sorita A, Thongprayoon C, Ahmed A, et al. Frequency and appropriateness of fasting orders in the hospital. Mayo Clin Proc. 2015;90:1225-1232. PubMed
6. Lee BY, Ok JJ, Abdelaziz Elsayed AA, Kim Y, Han DH. Preparative fasting for contrast-enhanced CT: reconsideration. Radiology. 2012;263:444-450. PubMed
7. Manchikanti L, Malla Y, Wargo BW, Fellows B. Preoperative fasting before interventional techniques: is it necessary or evidence-based? Pain Physician. 2011;14:459-467. PubMed
8. American Society of Anesthesiologists Committee. Practice guidelines for preoperative fasting and the use of pharmacologic agents to reduce the risk of pulmonary aspiration: application to healthy patients undergoing elective procedures: an updated report by the American Society of Anesthesiologists Committee on Standards and Practice Parameters. Anesthesiology. 2011;114:495-511. PubMed
9. Koeppe AT, Lubini M, Bonadeo NM, Moraes I Jr, Fornari F. Comfort, safety and quality of upper gastrointestinal endoscopy after 2 hours fasting: a randomized controlled trial. BMC Gastroenterol. 2013;13:158. PubMed
10. De Silva AP, Amarasiri L, Liyanage MN, Kottachchi D, Dassanayake AS, de Silva HJ. One-hour fast for water and six-hour fast for solids prior to endoscopy provides good endoscopic vision and results in minimum patient discomfort. J Gastroenterol Hepatol. 2009;24:1095-1097. PubMed
11. Hamid T, Aleem Q, Lau Y, et al. Pre-procedural fasting for coronary interventions: is it time to change practice? Heart. 2014;100:658-661. PubMed
12. Ahmed SU, Tonidandel W, Trella J, Martin NM, Chang Y. Peri-procedural protocols for interventional pain management techniques: a survey of US pain centers. Pain Physician. 2005;8:181-185. PubMed
13. Franklin GA, McClave SA, Hurt RT, et al. Physician-delivered malnutrition: why do patients receive nothing by mouth or a clear liquid diet in a university hospital setting? JPEN J Parenter Enteral Nutr. 2011;35:337-342. PubMed
14. Aldasouqi S, Sheikh A, Klosterman P, et al. Hypoglycemia in patients with diabetes who are fasting for laboratory blood tests: the Cape Girardeau Hypoglycemia En Route Prevention Program. Postgrad Med. 2013;125:136-143. PubMed
15. Aguilar-Nascimento JE, Salomão AB, Caporossi C, Diniz BN. Clinical benefits after the implementation of a multimodal perioperative protocol in elderly patients. Arq Gastroenterol. 2010;47:178-183. PubMed
16. Hilberath JN, Oakes DA, Shernan SK, Bulwer BE, D’Ambra MN, Eltzschig HK. Safety of transesophageal echocardiography. J Am Soc Echocardiogr. 2010;23: 1115-1127. PubMed
17. Hahn RT, Abraham T, Adams MS, et al. Guidelines for performing a comprehensive transesophageal echocardiographic examination: recommendations from the American Society of Echocardiography and the Society of Cardiovascular Anesthesiologists. J Am Soc Echocardiogr. 2013;26:921-964. PubMed
18. Sinan T, Leven H, Sheikh M. Is fasting a necessary preparation for abdominal ultrasound? BMC Med Imaging. 2003;3:1. PubMed
19. Garcia DA, Froes TR. Importance of fasting in preparing dogs for abdominal ultrasound examination of specific organs. J Small Anim Pract. 2014;55:630-634. PubMed
20. Kidney ultrasound. The Johns Hopkins University, The Johns Hopkins Hospital, and Johns Hopkins Health System. Health Library, Johns Hopkins Medicine. Available at: http://www.hopkinsmedicine.org/healthlibrary/test_procedures/urology/kidney_ultrasound_92,P07709/. Accessed August 17, 2015.
21. Surasi DS, Bhambhvani P, Baldwin JA, Almodovar SE, O’Malley JP. 18F-FDG PET and PET/CT patient preparation: a review of the literature. J Nucl Med Technol. 2014;42:5-13. PubMed
22. Kang SH, Hyun JJ. Preparation and patient evaluation for safe gastrointestinal endoscopy. Clin Endosc. 2013;46:212-218. PubMed
23. Smith I, Kranke P, Murat I, et al. Perioperative fasting in adults and children: guidelines from the European Society of Anaesthesiology. Eur J Anaesthesiol. 2011;28:556-569. PubMed
24. ASGE Standards of Practice Committee, Saltzman JR, Cash BD, Pasha SF, et al. Bowel preparation before colonoscopy. Gastrointest Endosc. 2015;81:781-794. PubMed
25. Hassan C, Bretthauer M, Kaminski MF, et al; European Society of Gastrointestinal Endoscopy. Bowel preparation for colonoscopy: European Society of Gastrointestinal Endoscopy (ESGE) guideline. Endoscopy. 2013;45:142-150. PubMed
26. Du Rand IA, Blaikley J, Booton R, et al; British Thoracic Society Bronchoscopy Guideline Group. British Thoracic Society guideline for diagnostic flexible bronchoscopy in adults: accredited by NICE. Thorax. 2013;68(suppl 1):i1-i44. PubMed
27. Thoracentesis. The Johns Hopkins University, The Johns Hopkins Hospital, and Johns Hopkins Health System. Health Library, Johns Hopkins Medicine. Available at: http://www.hopkinsmedicine.org/healthlibrary/test_procedures/pulmonary/thoracentesis_92,P07761/. Accessed August 18, 2015.
28. Runyon BA. Diagnostic and therapeutic abdominal paracentesis. UpToDate. Available at: http://www.uptodate.com/contents/diagnostic-and-therapeutic-abdominal-paracentesis. Published February 18, 2014. Accessed August 18, 2015.
29. Granata A, Fiorini F, Andrulli S, et al. Doppler ultrasound and renal artery stenosis: An overview. J Ultrasound. 2009;12:133-143. PubMed
30. Gerhard-Herman M, Gardin JM, Jaff M, et al. Guidelines for noninvasive vascular laboratory testing: a report from the American Society of Echocardiography and the Society for Vascular Medicine and Biology. Vasc Med. 2006;11:183-200. PubMed
Address for Correspondence and Reprint Requests: Deanne T. Kashiwagi, MD, Mayo Clinic, Division of Hospital Internal Medicine, 200 First Street SW, Rochester, MN 55905; Telephone: 507-255-8715; Fax: 507-255-9189; Email: [email protected]
Resident physicians routinely order inpatient laboratory tests,[1] and there is evidence to suggest that many of these tests are unnecessary[2] and potentially harmful.[3] The Society of Hospital Medicine has identified reducing unnecessary inpatient laboratory testing as part of the Choosing Wisely campaign.[4] Hospitalists at academic medical centers face growing pressure to develop processes that reduce low‐value care and to train residents to be stewards of healthcare resources.[5] Studies[6, 7, 8, 9] have described institutional and training factors that drive residents' resource utilization patterns, but, to our knowledge, none have described the factors that contribute to residents' unnecessary laboratory testing. To better understand the factors associated with residents' ordering patterns, we conducted a qualitative analysis of internal medicine (IM) and general surgery (GS) residents at a large academic medical center to describe residents' perceptions of: (1) the frequency of ordering unnecessary inpatient laboratory tests, (2) the factors contributing to that behavior, and (3) potential interventions to change it. We also explored differences in responses by specialty and training level.
METHODS
In October 2014, we surveyed all IM and GS residents at the Hospital of the University of Pennsylvania. We reviewed the literature and conducted focus groups with residents to formulate items for the survey instrument. A draft of the survey was administered to 8 residents from both specialties, and their feedback was collated and incorporated into the final version of the instrument. The final 15‐question survey comprised 4 components: (1) training information, such as specialty and postgraduate year (PGY); (2) self‐reported frequency of perceived unnecessary ordering of inpatient laboratory tests; (3) perceptions of factors contributing to unnecessary ordering; and (4) potential interventions to reduce unnecessary ordering. An unnecessary test was defined as a test that would not change management regardless of its result. To increase response rates, participants were entered into drawings for $5 gift cards, a $200 air travel voucher, and an iPad mini.
Descriptive statistics and χ2 tests were conducted with Stata version 13 (StataCorp LP, College Station, TX) to explore differences in the frequency of responses by specialty and training level. To identify themes that emerged from free‐text responses, two independent reviewers (M.S.S. and E.J.K.) performed qualitative content analysis using grounded theory.[10] Reviewers read 10% of responses to create a coding guide. Another 10% of the responses were randomly selected to assess inter‐rater reliability by calculating κ scores. The reviewers independently coded the remaining 80% of responses. Discrepancies were adjudicated by consensus between the reviewers. The University of Pennsylvania Institutional Review Board deemed this study exempt from review.
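The inter‐rater reliability check described above can be illustrated with a short sketch. This is not the authors' actual workflow (they used Stata), and the coder labels below are hypothetical; the sketch simply shows how Cohen's κ is computed for two reviewers assigning one category per free‐text response:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning one category per response."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of responses both raters coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for 6 free-text responses (category names invented here).
a = ["cost", "cost", "role", "edu", "cost", "role"]
b = ["cost", "role", "role", "edu", "cost", "role"]
print(round(cohens_kappa(a, b), 2))  # 0.74
```

κ corrects raw percent agreement for the agreement expected by chance from each rater's category frequencies, which is why it is preferred over simple agreement for coding-guide reliability.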
RESULTS
The sample comprised 57.0% (85/149) of IM and 54.4% (31/57) of GS residents (Table 1). Among respondents, perceived unnecessary inpatient laboratory test ordering was self‐reported by 88.2% of IM and 67.7% of GS residents. This behavior was reported to occur on a daily basis by 43.5% and 32.3% of responding IM and GS residents, respectively. Across both specialties, the most commonly reported factors contributing to these behaviors were learned practice habit/routine (90.5%), a lack of understanding of the costs associated with lab tests (86.2%), diagnostic uncertainty (82.8%), and fear of not having the lab result information when requested by an attending (75.9%). There were no significant differences in any of these contributing factors by specialty or PGY level. Among respondents who completed a free‐text response (IM: 76 of 85; GS: 21 of 31), the most commonly proposed interventions to address these issues were increasing cost transparency (IM 40.8%; GS 33.3%), improvements to faculty role modeling (IM 30.2%; GS 33.3%), and computerized reminders or decision support (IM 21.1%; GS 28.6%) (Table 2).
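As a rough illustration of how such a specialty comparison is tested, a 2×2 χ2 statistic can be computed by hand. The counts below are reconstructed from the reported percentages (88.2% of 85 IM and 67.7% of 31 GS respondents), so this is an assumption‐laden sketch, not the authors' Stata output:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], via n*(ad - bc)^2 / (product of margins)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Reported vs. not-reported unnecessary ordering, IM vs. GS
# (counts reconstructed from the percentages in the text).
im_yes, im_no = 75, 10   # 75/85 ~ 88.2%
gs_yes, gs_no = 21, 10   # 21/31 ~ 67.7%
stat = chi2_2x2(im_yes, im_no, gs_yes, gs_no)
print(round(stat, 2))  # 6.69
```

The statistic is then compared against the χ2 distribution with 1 degree of freedom; in practice a library routine such as `scipy.stats.chi2_contingency` would be used rather than the shortcut formula.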
Residents' Self‐Reported Frequency of and Factors Contributing to Perceived Unnecessary Inpatient Laboratory Ordering
Residents (n = 116)*
NOTE: Abbreviations: EHR, electronic health record. *There were 116 responses out of 206 eligible residents, among whom 57.0% (85/149) were IM and 54.4% (31/57) were GS residents. Among the IM respondents, 36 were PGY‐1 interns, and among the GS respondents, 12 were PGY‐1 interns. There were no differences in response across specialty and PGY level. Respondents were asked, "Please rate your level of agreement with whether the following items contribute to unnecessary ordering" on a 5‐point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree). Agreement included survey participants who agreed or strongly agreed with the statement.
Reported he or she orders unnecessary routine labs, no. (%)
96 (82.8)
Frequency of ordering unnecessary labs, no. (%)
Daily
47 (49.0)
2–3 times/week
44 (45.8)
1 time/week or less
5 (5.2)
Agreement with statement as factors contributing to ordering unnecessary labs, no. (%)
Practice habit; I am trained to order repeating daily labs
105 (90.5)
Lack of cost transparency of labs
100 (86.2)
Discomfort with diagnostic uncertainty
96 (82.8)
Concern that the attending will ask for the data and I will not have it
88 (75.9)
Lack of role modeling of cost conscious care
78 (67.2)
Lack of cost conscious culture at our institution
76 (65.5)
Lack of experience
72 (62.1)
Ease of ordering repeating labs in EHR
60 (51.7)
Fear of litigation from missed diagnosis related to lab data
44 (37.9)
Residents' Suggestions for Possible Solutions to Unnecessary Ordering
Categories*
Representative Quotes
IM, n = 76, No. (%)
GS, n = 21, No. (%)
NOTE: Abbreviations: coags, coagulation tests; EHR, electronic health record; IM, internal medicine; GS, general surgery; LFT, liver function tests. *Kappa scores: mean 0.78; range, 0.59–1. Responses could be assigned to multiple categories. There were 85 of 149 (57.0%) IM respondents, among whom 76 of 85 (89.4%) provided a free‐text suggestion. There were 31 of 57 (54.4%) GS respondents, among whom 21 of 31 (67.7%) provided a free‐text suggestion.
Cost transparency
Let us know the costs of what we order and train us to remember that a patient gets a bill and we are contributing to a possible bankruptcy or hardship.
31 (40.8)
7 (33.3)
Display the cost of labs when [we're] ordering them [in the EHR].
Post the prices so that MDs understand how much everything costs.
Role modeling restraint
Train attendings to be more critical about necessity of labs and overordering. Make it part of rounding practice to decide on the labs truly needed for each patient the next day.
23 (30.2)
7 (33.3)
Attendings could review daily lab orders and briefly explain which they do not believe we need. This would allow residents to learn from their experience and their thought processes.
Encouragement and modeling of this practice from the faculty perhaps by laying out more clear expectations for which clinical situations warrant daily labs and which do not.
Computerized or decision support
When someone orders labs and the previous day's lab was normal or labs were stable for 2 days, an alert should pop up to reconsider.
16 (21.1)
6 (28.6)
Prevent us from being able to order repeating [or standing] labs.
Track how many times labs changed management, and restrict certain labs, like LFTs/coags.
High‐value care educational curricula
Increase awareness of issue by having a noon conference about it or some other forum for residents to discuss the issue.
12 (15.8)
4 (19.0)
Establish guidelines for housestaff to learn/follow from start of residency.
Integrate cost conscious care into training program curricula.
System improvements
Make it easier to get labs later [in the day].
6 (7.9)
2 (9.5)
Improve timeliness of phlebotomy/laboratory systems.
More responsive phlebotomy.
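The computerized‐reminder suggestion in Table 2 amounts to a simple rule: prompt reconsideration when recent results were normal or stable. The sketch below is a hypothetical illustration of that logic only; the function name, the 10% stability threshold, the 2‐day window, and the data shape are all invented for this example and do not describe any existing EHR feature:

```python
def should_prompt_reconsider(recent_results, normal_range, window=2):
    """Return True if the last `window` daily results were all normal, or
    were stable (within 10% of each other), per the residents' proposed rule."""
    if len(recent_results) < window:
        return False  # not enough history to judge
    last = recent_results[-window:]
    lo, hi = normal_range
    all_normal = all(lo <= v <= hi for v in last)
    stable = max(last) - min(last) <= 0.1 * max(abs(v) for v in last)
    return all_normal or stable

# Hypothetical daily hemoglobin values (g/dL), normal range 12-16.
print(should_prompt_reconsider([13.1, 13.0], (12, 16)))  # True: normal twice
print(should_prompt_reconsider([9.0, 11.5], (12, 16)))   # False: abnormal, changing
```

In a real clinical decision‐support system the thresholds would need clinical validation per analyte, and any hard restriction (as some residents proposed for LFTs/coags) would require an override pathway.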
DISCUSSION
A significant portion of inpatient laboratory testing is unnecessary,[2] creating an opportunity to reduce utilization and associated costs. Our findings demonstrate that these behaviors occur at high levels among residents (IM 88.2%; GS 67.7%) at a large academic medical center. These findings also reveal that residents attribute this behavior to practice habit, lack of access to cost data, and perceived expectations for daily lab ordering by faculty. Interventions to change these behaviors will need to involve changes to the health system culture, increasing transparency of the costs associated with healthcare services, and shifting to a model of education that celebrates restraint.[11]
Our study adds to the emerging literature on delivering value in healthcare and provides several important insights for hospitalists and medical educators at academic centers. First, our findings reflect the significant role that the clinical learning environment plays in shaping practice behaviors among residents. Residency training is a critical time when physicians begin to form habits that imprint upon their future practice patterns,[5] and our residents recognize that their ordering of tests they perceive to be unnecessary is driven by habit. Studies[6, 7] have shown that residents may implicitly accept certain styles of practice as correct and are more likely to adopt those styles during the early years of their training. In our institution, for example, ordering standing or daily morning labs using a repeated copy‐forward function in the electronic health record is a common, learned practice (a ritual) passed down from senior to junior residents year after year, and it is common across both training specialties. There is a need to better understand, measure, and change the culture of the clinical learning environment so that it demonstrates practices and values that model high‐value care for residents. Multipronged interventions that address culture, oversight, and systems change[12] are necessary to facilitate effective physician stewardship of inpatient laboratory testing and to address a problem so deeply ingrained in habit.
Second, residents in our study believe that access to cost information will better equip them to reduce unnecessary lab ordering. Two recent systematic reviews[13, 14] have shown that having real‐time access to charges changes physician ordering and prescribing behavior. Increasing cost transparency may not only be an important intervention for hospitals to reduce overuse and control cost, but also better arm resident physicians with the information they need to make higher‐value recommendations for their patients and be stewards of healthcare resources.
Third, our study highlights that residents' unnecessary laboratory utilization is driven by perceived, unspoken expectations for such ordering by faculty. This reflects an important undercurrent in a medical education system that has historically emphasized and rewarded thoroughness while often penalizing restraint.[11] Hospitalists can play a major role in changing these behaviors by sharing their expectations regarding test ordering at the beginning of teaching rotations, including teaching points that discourage overutilization during rounds, and role modeling high‐value care in their own practice. Taken together and practiced routinely, these behaviors could prevent poor habits from forming in trainees and discourage overinvestigation. Hospitalists share responsibility for disseminating the practice of restraint to achieve more cost‐effective care, and purposeful faculty development efforts in high‐value care are needed. Additionally, supporting physician leaders who serve as the institutional bridge between graduate medical education and the health system[15] could foster an environment conducive to coaching residents and faculty to reduce unnecessary practice variation.
This study is subject to several limitations. First, the survey was conducted at a single academic medical center with a modest response rate, so our findings may not generalize to other settings or training programs. Second, we validated neither residents' perception of whether tests were, in fact, unnecessary nor residents' self‐reports of their own ordering behavior, which may differ from actual behavior; these are two distinct but interrelated limitations. Although self‐reported responses are an important indicator of residents' practice, validating them with objective measures, such as expert physician review of the necessity of ordered tests or counts of inpatient labs ordered by residents, could add further insight. If residents under‐reported this behavior, ordering of perceived unnecessary tests may be even more common than we found. Third, although the survey provided a clear definition of "unnecessary" and the instrument incorporated feedback from residents in our preliminary pilot, respondents' interpretation of the term may still have varied, and this variation may contribute to our findings.
In conclusion, this is one of the first qualitative evaluations to explore residents' perceptions of why they order unnecessary inpatient laboratory tests. Our findings offer a rich understanding of residents' beliefs about their own role in unnecessary lab ordering and explore possible solutions through the lens of the resident. Yet it is unclear whether tests deemed unnecessary by residents would also be considered unnecessary by attending physicians or even patients. Future efforts are needed to better define which inpatient tests are unnecessary from multiple perspectives, including those of clinicians and patients.
Acknowledgements
The authors thank Patrick J. Brennan, MD, Jeffery S. Berns, MD, Lisa M. Bellini, MD, Jon B. Morris, MD, and Irving Nachamkin, DrPH, MPH, all from the Hospital of the University of Pennsylvania, for supporting this work. They received no compensation.
Disclosures: This work was presented in part at the AAMC Integrating Quality Meeting, June 11, 2015, Chicago, Illinois; and the Alliance for Academic Internal Medicine Fall Meeting, October 9, 2015, Atlanta, Georgia. The authors report no conflicts of interest.
References
1. Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: a report from one university's hospitalist service. Acad Med. 2011;86(1):139–145.
2. Zhi M, Ding EL, Theisen‐Toupal J, Whelan J, Arnaout R. The landscape of inappropriate laboratory testing: a 15‐year meta‐analysis. PLoS One. 2013;8(11):e78962.
3. Salisbury A, Reid K, Alexander K, et al. Diagnostic blood loss from phlebotomy and hospital‐acquired anemia during acute myocardial infarction. Arch Intern Med. 2011;171(18):1646–1653.
4. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486–492.
5. Korenstein D. Charting the route to high‐value care: the role of medical education. JAMA. 2016;314(22):2359–2361.
6. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385–2393.
7. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists' ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640–1648.
8. Ryskina KL, Dine CJ, Kim EJ, Bishop TF, Epstein AJ. Effect of attending practice style on generic medication prescribing by residents in the clinic setting: an observational study. J Gen Intern Med. 2015;30(9):1286–1293.
9. Patel MS, Reed DA, Smith C, Arora VM. Role‐modeling cost‐conscious care—a national evaluation of perceptions of faculty at teaching hospitals in the United States. J Gen Intern Med. 2015;30(9):1294–1298.
10. Glaser BG, Strauss AL. The discovery of grounded theory. Int J Qual Methods. 1967;5:1–10.
11. Detsky AC, Verma AA. A new model for medical education: celebrating restraint. JAMA. 2012;308(13):1329–1330.
12. Moriates C, Shah NT, Arora VM. A framework for the frontline: how hospitalists can improve healthcare value. J Hosp Med. 2016;11(4):297–302.
13. Goetz C, Rotman SR, Hartoularos G, Bishop TF. The effect of charge display on cost of care and physician practice behaviors: a systematic review. J Gen Intern Med. 2015;30(6):835–842.
14. Silvestri MT, Bongiovanni TR, Glover JG, Gross CP. Impact of price display on provider ordering: a systematic review. J Hosp Med. 2016;11(1):65–76.
15. Gupta R, Arora VM. Merging the health system and education silos to better educate future physicians. JAMA. 2015;314(22):2349–2350.
Resident physicians routinely order inpatient laboratory tests,[1] and there is evidence to suggest that many of these tests are unnecessary[2] and potentially harmful.[3] The Society of Hospital Medicine has identified reducing the unnecessary ordering of inpatient laboratory testing as part of the Choosing Wisely campaign.[4] Hospitalists at academic medical centers face growing pressures to develop processes to reduce low‐value care and train residents to be stewards of healthcare resources.[5] Studies[6, 7, 8, 9] have described that institutional and training factors drive residents' resource utilization patterns, but, to our knowledge, none have described what factors contribute to residents' unnecessary laboratory testing. To better understand the factors associated with residents' ordering patterns, we conducted a qualitative analysis of internal medicine (IM) and general surgery (GS) residents at a large academic medical center in order to describe residents' perception of the: (1) frequency of ordering unnecessary inpatient laboratory tests, (2) factors contributing to that behavior, and (3) potential interventions to change it. We also explored differences in responses by specialty and training level.
METHODS
In October 2014, we surveyed all IM and GS residents at the Hospital of the University of Pennsylvania. We reviewed the literature and conducted focus groups with residents to formulate items for the survey instrument. A draft of the survey was administered to 8 residents from both specialties, and their feedback was collated and incorporated into the final version of the instrument. The final 15‐question survey was comprised of 4 components: (1) training information such as specialty and postgraduate year (PGY), (2) self‐reported frequency of perceived unnecessary ordering of inpatient laboratory tests, (3) perception of factors contributing to unnecessary ordering, and (4) potential interventions to reduce unnecessary ordering. An unnecessary test was defined as a test that would not change management regardless of its result. To increase response rates, participants were entered into drawings for $5 gift cards, a $200 air travel voucher, and an iPad mini.
Descriptive statistics and 2tests were conducted with Stata version 13 (StataCorp LP, College Station, TX) to explore differences in the frequency of responses by specialty and training level. To identify themes that emerged from free‐text responses, two independent reviewers (M.S.S. and E.J.K.) performed qualitative content analysis using grounded theory.[10] Reviewers read 10% of responses to create a coding guide. Another 10% of the responses were randomly selected to assess inter‐rater reliability by calculating scores. The reviewers independently coded the remaining 80% of responses. Discrepancies were adjudicated by consensus between the reviewers. The University of Pennsylvania Institutional Review Board deemed this study exempt from review.
RESULTS
The sample comprised 57.0% (85/149) of IM and 54.4% (31/57) of GS residents (Table 1). Among respondents, perceived unnecessary inpatient laboratory test ordering was self‐reported by 88.2% of IM and 67.7% of GS residents. This behavior was reported to occur on a daily basis by 43.5% and 32.3% of responding IM and GS residents, respectively. Across both specialties, the most commonly reported factors contributing to these behaviors were learned practice habit/routine (90.5%), a lack of understanding of the costs associated with lab tests (86.2%), diagnostic uncertainty (82.8%), and fear of not having the lab result information when requested by an attending (75.9%). There were no significant differences in any of these contributing factors by specialty or PGY level. Among respondents who completed a free‐text response (IM: 76 of 85; GS: 21 of 31), the most commonly proposed interventions to address these issues were increasing cost transparency (IM 40.8%; GS 33.3%), improvements to faculty role modeling (IM 30.2%; GS 33.3%), and computerized reminders or decision support (IM 21.1%; GS 28.6%) (Table 2).
Residents' Self‐Reported Frequency of and Factors Contributing to Perceived Unnecessary Inpatient Laboratory Ordering
Residents (n = 116)*
NOTE: Abbreviations: EHR, electronic health record. *There were 116 responses out of 206 eligible residents, among whom 57.0% (85/149) were IM and 54.4% (31/57) were GS residents. Among the IM respondents, 36 were PGY‐1 interns, and among the GS respondents, 12 were PGY‐1 interns. There were no differences in response across specialty and PGY level. Respondents were asked, Please rate your level of agreement with whether the following items contribute to unnecessary ordering on a 5‐point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree). Agreement included survey participants who agreed and/or strongly agreed with the statement.
Reported he or she orders unnecessary routine labs, no. (%)
96 (82.8)
Frequency of ordering unnecessary labs, no. (%)
Daily
47 (49.0)
23 times/week
44 (45.8)
1 time/week or less
5 (5.2)
Agreement with statement as factors contributing to ordering unnecessary labs, no. (%)
Practice habit; I am trained to order repeating daily labs
105 (90.5)
Lack of cost transparency of labs
100 (86.2)
Discomfort with diagnostic uncertainty
96 (82.8)
Concern that the attending will ask for the data and I will not have it
88 (75.9)
Lack of role modeling of cost conscious care
78 (67.2)
Lack of cost conscious culture at our institution
76 (65.5)
Lack of experience
72 (62.1)
Ease of ordering repeating labs in EHR
60 (51.7)
Fear of litigation from missed diagnosis related to lab data
44 (37.9)
Residents' Suggestions for Possible Solutions to Unnecessary Ordering
Categories*
Representative Quotes
IM, n = 76, No. (%)
GS, n = 21, No. (%)
NOTE: Abbreviations: coags, coagulation tests; EHR, electronic health record; IM, internal medicine; GS, general surgery; LFT, liver function tests. *Kappa scores: mean 0.78; range, 0.591. Responses could be assigned to multiple categories. There were 85 of 149 (57.0%) IM respondents, among whom 76 of 85 (89.4%) provided a free‐text suggestion. There were 31 of 57 (54.4%) GS respondents, among whom 21 of 31 (67.7%) provided a free‐text suggestion.
Cost transparency
Let us know the costs of what we order and train us to remember that a patient gets a bill and we are contributing to a possible bankruptcy or hardship.
31 (40.8)
7 (33.3)
Display the cost of labs when [we're] ordering them [in the EHR].
Post the prices so that MDs understand how much everything costs.
Role modeling restrain
Train attendings to be more critical about necessity of labs and overordering. Make it part of rounding practice to decide on the labs truly needed for each patient the next day.
23 (30.2)
7 (33.3)
Attendings could review daily lab orders and briefly explain which they do not believe we need. This would allow residents to learn from their experience and their thought processes.
Encouragement and modeling of this practice from the faculty perhaps by laying out more clear expectations for which clinical situations warrant daily labs and which do not.
Computerized or decision support
When someone orders labs and the previous day's lab was normal or labs were stable for 2 days, an alert should pop up to reconsider.
16 (21.1)
6 (28.6)
Prevent us from being able to order repeating [or standing] labs.
Track how many times labs changed management, and restrict certain labslike LFTs/coags.
High‐value care educational curricula
Increase awareness of issue by having a noon conference about it or some other forum for residents to discuss the issue.
12 (15.8)
4 (19.0)
Establish guidelines for housestaff to learn/follow from start of residency.
Integrate cost conscious care into training program curricula.
System improvements
Make it easier to get labs later [in the day]
6 (7.9)
2 (9.5)
Improve timeliness of phlebotomy/laboratory systems.
More responsive phlebotomy.
DISCUSSION
A significant portion of inpatient laboratory testing is unnecessary,[2] creating an opportunity to reduce utilization and associated costs. Our findings demonstrate that these behaviors occur at high levels among residents (IM 88.2%; GS 67.7%) at a large academic medical center. These findings also reveal that residents attribute this behavior to practice habit, lack of access to cost data, and perceived expectations for daily lab ordering by faculty. Interventions to change these behaviors will need to involve changes to the health system culture, increasing transparency of the costs associated with healthcare services, and shifting to a model of education that celebrates restraint.[11]
Our study adds to the emerging quest for delivering value in healthcare and provides several important insights for hospitalists and medical educators at academic centers. First, our findings reflect the significant role that the clinical learning environment plays in influencing practice behaviors among residents. Residency training is a critical time when physicians begin to form habits that imprint upon their future practice patterns,[5] and our residents are aware that their behavior to order what they perceive to be unnecessary laboratory tests is driven by habit. Studies[6, 7] have shown that residents may implicitly accept certain styles of practice as correct and are more likely to adopt those styles during the early years of their training. In our institution, for example, the process of ordering standing or daily morning labs using a repeated copy‐forward function in the electronic health record is a common, learned practice (a ritual) that is passed down from senior to junior residents year after year. This practice is common across both training specialties. There is a need to better understand, measure, and change the culture in the clinical learning environment to demonstrate practices and values that model high‐value care for residents. Multipronged interventions that address culture, oversight, and systems change[12] are necessary to facilitate effective physician stewardship of inpatient laboratory testing and attack a problem so deeply ingrained in habit.
Second, residents in our study believe that access to cost information will better equip them to reduce unnecessary lab ordering. Two recent systematic reviews[13, 14] have shown that having real‐time access to charges changes physician ordering and prescribing behavior. Increasing cost transparency may not only be an important intervention for hospitals to reduce overuse and control cost, but also better arm resident physicians with the information they need to make higher‐value recommendations for their patients and be stewards of healthcare resources.
Third, our study highlights that residents' unnecessary laboratory utilization is driven by perceived, unspoken expectations for such ordering by faculty. This reflects an important undercurrent in the medical education system that has historically emphasized and rewarded thoroughness while often penalizing restraint.[11] Hospitalists can play a major role in changing these behaviors by sharing their expectations regarding test ordering at the beginning of teaching rotations, including teaching points that discourage overutilization during rounds, and role modeling high‐value care in their own practice. Taken together and practiced routinely, these hospitalist behaviors could prevent poor habits from forming in our trainees and discourage overinvestigation. Hospitalists must be responsible to disseminate the practice of restraint to achieve more cost‐effective care. Purposeful faculty development efforts in the area of high‐value care are needed. Additionally, supporting physician leaders that serve as the institutional bridge between graduate medical education and the health system[15] could foster an environment conducive to coaching residents and faculty to reduce unnecessary practice variation.
This study is subject to several limitations. First, the survey was conducted at a single academic medical center, with a modest response rate, and thus our findings may not be generalizable to other settings or residents of different training programs. Second, we did not validate residents' perception of whether or not tests were, in fact, unnecessary. We also did not validate residents' self‐reporting of their own behavior, which may vary from actual behavior. Lack of validation at the level of the tests and at the level of the residents' behavior are two distinct but inter‐related limitations. Although self‐reported responses among residents are an important indicator of their practice, validating these data with objective measures, such as a measure of necessity of ordered lab tests as determined by an expert physician or group of experienced physicians or the number of inpatient labs ordered by residents, may add further insights. Ordering of perceived unnecessary tests may be even more common if there was under‐reporting of this behavior. Third, although we provided a definition within the survey, interpretation among survey respondents of the term unnecessary may vary, and this variation may contribute to our findings. However, we did provide a clear definition in the survey and we attempted to mitigate this with feedback from residents on our preliminary pilot.
In conclusion, this is one of the first qualitative evaluations to explore residents' perceptions on why they order unnecessary inpatient laboratory tests. Our findings offer a rich understanding of residents' beliefs about their own role in unnecessary lab ordering and explore possible solutions through the lens of the resident. Yet, it is unclear whether tests deemed unnecessary by residents would also be considered unnecessary by attending physicians or even patients. Future efforts are needed to better define which inpatient tests are unnecessary from multiple perspectives including clinicians and patients.
Acknowledgements
The authors thank Patrick J. Brennan, MD, Jeffery S. Berns, MD, Lisa M. Bellini, MD, Jon B. Morris, MD, and Irving Nachamkin, DrPH, MPH, all from the Hospital of the University of Pennsylvania, for supporting this work. They received no compensation.
Disclosures: This work was presented in part at the AAMC Integrating Quality Meeting, June 11, 2015, Chicago, Illinois; and the Alliance for Academic Internal Medicine Fall Meeting, October 9, 2015, Atlanta, Georgia. The authors report no conflicts of interest.
Resident physicians routinely order inpatient laboratory tests,[1] and there is evidence to suggest that many of these tests are unnecessary[2] and potentially harmful.[3] The Society of Hospital Medicine has identified reducing the unnecessary ordering of inpatient laboratory testing as part of the Choosing Wisely campaign.[4] Hospitalists at academic medical centers face growing pressures to develop processes to reduce low‐value care and train residents to be stewards of healthcare resources.[5] Studies[6, 7, 8, 9] have described that institutional and training factors drive residents' resource utilization patterns, but, to our knowledge, none have described what factors contribute to residents' unnecessary laboratory testing. To better understand the factors associated with residents' ordering patterns, we conducted a qualitative analysis of internal medicine (IM) and general surgery (GS) residents at a large academic medical center in order to describe residents' perception of the: (1) frequency of ordering unnecessary inpatient laboratory tests, (2) factors contributing to that behavior, and (3) potential interventions to change it. We also explored differences in responses by specialty and training level.
METHODS
In October 2014, we surveyed all IM and GS residents at the Hospital of the University of Pennsylvania. We reviewed the literature and conducted focus groups with residents to formulate items for the survey instrument. A draft of the survey was administered to 8 residents from both specialties, and their feedback was collated and incorporated into the final version of the instrument. The final 15‐question survey was comprised of 4 components: (1) training information such as specialty and postgraduate year (PGY), (2) self‐reported frequency of perceived unnecessary ordering of inpatient laboratory tests, (3) perception of factors contributing to unnecessary ordering, and (4) potential interventions to reduce unnecessary ordering. An unnecessary test was defined as a test that would not change management regardless of its result. To increase response rates, participants were entered into drawings for $5 gift cards, a $200 air travel voucher, and an iPad mini.
Descriptive statistics and 2tests were conducted with Stata version 13 (StataCorp LP, College Station, TX) to explore differences in the frequency of responses by specialty and training level. To identify themes that emerged from free‐text responses, two independent reviewers (M.S.S. and E.J.K.) performed qualitative content analysis using grounded theory.[10] Reviewers read 10% of responses to create a coding guide. Another 10% of the responses were randomly selected to assess inter‐rater reliability by calculating scores. The reviewers independently coded the remaining 80% of responses. Discrepancies were adjudicated by consensus between the reviewers. The University of Pennsylvania Institutional Review Board deemed this study exempt from review.
RESULTS
The sample comprised 57.0% (85/149) of IM and 54.4% (31/57) of GS residents (Table 1). Among respondents, perceived unnecessary inpatient laboratory test ordering was self‐reported by 88.2% of IM and 67.7% of GS residents. This behavior was reported to occur on a daily basis by 43.5% and 32.3% of responding IM and GS residents, respectively. Across both specialties, the most commonly reported factors contributing to these behaviors were learned practice habit/routine (90.5%), a lack of understanding of the costs associated with lab tests (86.2%), diagnostic uncertainty (82.8%), and fear of not having the lab result information when requested by an attending (75.9%). There were no significant differences in any of these contributing factors by specialty or PGY level. Among respondents who completed a free‐text response (IM: 76 of 85; GS: 21 of 31), the most commonly proposed interventions to address these issues were increasing cost transparency (IM 40.8%; GS 33.3%), improvements to faculty role modeling (IM 30.2%; GS 33.3%), and computerized reminders or decision support (IM 21.1%; GS 28.6%) (Table 2).
Table 1. Residents' Self-Reported Frequency of and Factors Contributing to Perceived Unnecessary Inpatient Laboratory Ordering

Values are no. (%) of residents (n = 116).*

Reported he or she orders unnecessary routine labs: 96 (82.8)

Frequency of ordering unnecessary labs
  Daily: 47 (49.0)
  2–3 times/week: 44 (45.8)
  1 time/week or less: 5 (5.2)

Agreement with statement as a factor contributing to ordering unnecessary labs
  Practice habit; I am trained to order repeating daily labs: 105 (90.5)
  Lack of cost transparency of labs: 100 (86.2)
  Discomfort with diagnostic uncertainty: 96 (82.8)
  Concern that the attending will ask for the data and I will not have it: 88 (75.9)
  Lack of role modeling of cost-conscious care: 78 (67.2)
  Lack of cost-conscious culture at our institution: 76 (65.5)
  Lack of experience: 72 (62.1)
  Ease of ordering repeating labs in the EHR: 60 (51.7)
  Fear of litigation from a missed diagnosis related to lab data: 44 (37.9)

NOTE: Abbreviations: EHR, electronic health record. *There were 116 responses from 206 eligible residents, of whom 57.0% (85/149) were IM and 54.4% (31/57) were GS residents. Among the IM respondents, 36 were PGY-1 interns; among the GS respondents, 12 were PGY-1 interns. There were no differences in response across specialty and PGY level. Respondents were asked, "Please rate your level of agreement with whether the following items contribute to unnecessary ordering" on a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree). Agreement included survey participants who agreed and/or strongly agreed with the statement.
Table 2. Residents' Suggestions for Possible Solutions to Unnecessary Ordering

Values are no. (%) of respondents who provided a free-text suggestion (IM, n = 76; GS, n = 21), with representative quotes.*

Cost transparency: IM 31 (40.8); GS 7 (33.3)
  "Let us know the costs of what we order and train us to remember that a patient gets a bill and we are contributing to a possible bankruptcy or hardship."
  "Display the cost of labs when [we're] ordering them [in the EHR]."
  "Post the prices so that MDs understand how much everything costs."

Role modeling restraint: IM 23 (30.2); GS 7 (33.3)
  "Train attendings to be more critical about necessity of labs and overordering. Make it part of rounding practice to decide on the labs truly needed for each patient the next day."
  "Attendings could review daily lab orders and briefly explain which they do not believe we need. This would allow residents to learn from their experience and their thought processes."
  "Encouragement and modeling of this practice from the faculty, perhaps by laying out more clear expectations for which clinical situations warrant daily labs and which do not."

Computerized reminders or decision support: IM 16 (21.1); GS 6 (28.6)
  "When someone orders labs and the previous day's lab was normal or labs were stable for 2 days, an alert should pop up to reconsider."
  "Prevent us from being able to order repeating [or standing] labs."
  "Track how many times labs changed management, and restrict certain labs, like LFTs/coags."

High-value care educational curricula: IM 12 (15.8); GS 4 (19.0)
  "Increase awareness of the issue by having a noon conference about it or some other forum for residents to discuss the issue."
  "Establish guidelines for housestaff to learn/follow from the start of residency."
  "Integrate cost-conscious care into training program curricula."

System improvements: IM 6 (7.9); GS 2 (9.5)
  "Make it easier to get labs later [in the day]."
  "Improve timeliness of phlebotomy/laboratory systems."
  "More responsive phlebotomy."

NOTE: Abbreviations: coags, coagulation tests; EHR, electronic health record; IM, internal medicine; GS, general surgery; LFT, liver function tests. *Kappa scores: mean, 0.78; range, 0.59–1. Responses could be assigned to multiple categories. There were 85 of 149 (57.0%) IM respondents, of whom 76 (89.4%) provided a free-text suggestion, and 31 of 57 (54.4%) GS respondents, of whom 21 (67.7%) provided a free-text suggestion.
DISCUSSION
A significant portion of inpatient laboratory testing is unnecessary,[2] creating an opportunity to reduce utilization and associated costs. Our findings demonstrate that these behaviors occur at high levels among residents (IM 88.2%; GS 67.7%) at a large academic medical center. These findings also reveal that residents attribute this behavior to practice habit, lack of access to cost data, and perceived expectations for daily lab ordering by faculty. Interventions to change these behaviors will need to involve changes to the health system culture, increasing transparency of the costs associated with healthcare services, and shifting to a model of education that celebrates restraint.[11]
Our study adds to the emerging literature on delivering value in healthcare and provides several important insights for hospitalists and medical educators at academic centers. First, our findings reflect the significant role that the clinical learning environment plays in shaping residents' practice behaviors. Residency training is a critical time when physicians begin to form habits that imprint upon their future practice patterns,[5] and our residents recognize that their ordering of tests they perceive to be unnecessary is driven by habit. Studies[6, 7] have shown that residents may implicitly accept certain styles of practice as correct and are more likely to adopt those styles during the early years of their training. In our institution, for example, ordering standing or daily morning labs using a repeated copy-forward function in the electronic health record is a common, learned practice (a ritual) passed down from senior to junior residents year after year, and it is common across both training specialties. There is a need to better understand, measure, and change the culture of the clinical learning environment so that it demonstrates practices and values that model high-value care for residents. Multipronged interventions that address culture, oversight, and systems change[12] are necessary to facilitate effective physician stewardship of inpatient laboratory testing and to attack a problem so deeply ingrained in habit.
Second, residents in our study believe that access to cost information will better equip them to reduce unnecessary lab ordering. Two recent systematic reviews[13, 14] have shown that having real‐time access to charges changes physician ordering and prescribing behavior. Increasing cost transparency may not only be an important intervention for hospitals to reduce overuse and control cost, but also better arm resident physicians with the information they need to make higher‐value recommendations for their patients and be stewards of healthcare resources.
Third, our study highlights that residents' unnecessary laboratory utilization is driven by perceived, unspoken expectations for such ordering by faculty. This reflects an important undercurrent in the medical education system, which has historically emphasized and rewarded thoroughness while often penalizing restraint.[11] Hospitalists can play a major role in changing these behaviors by sharing their expectations regarding test ordering at the beginning of teaching rotations, including teaching points that discourage overutilization during rounds, and role modeling high-value care in their own practice. Taken together and practiced routinely, these hospitalist behaviors could prevent poor habits from forming in our trainees and discourage overinvestigation. Hospitalists must take responsibility for disseminating the practice of restraint to achieve more cost-effective care. Purposeful faculty development efforts in the area of high-value care are needed. Additionally, supporting physician leaders who serve as the institutional bridge between graduate medical education and the health system[15] could foster an environment conducive to coaching residents and faculty to reduce unnecessary practice variation.
This study is subject to several limitations. First, the survey was conducted at a single academic medical center with a modest response rate, so our findings may not generalize to other settings or to residents of different training programs. Second, we did not validate residents' perceptions of whether tests were, in fact, unnecessary, nor did we validate residents' self-reports of their own behavior, which may differ from actual behavior; these are two distinct but inter-related limitations. Although self-reported responses among residents are an important indicator of their practice, validating these data with objective measures, such as a determination of the necessity of ordered lab tests by an expert physician or group of experienced physicians, or the number of inpatient labs ordered by residents, may add further insights. Ordering of perceived unnecessary tests may be even more common than we found if residents under-reported this behavior. Third, interpretation of the term unnecessary may have varied among respondents and contributed to our findings, although we provided a clear definition in the survey and refined the instrument with resident feedback during our preliminary pilot.
In conclusion, this is one of the first qualitative evaluations to explore residents' perceptions on why they order unnecessary inpatient laboratory tests. Our findings offer a rich understanding of residents' beliefs about their own role in unnecessary lab ordering and explore possible solutions through the lens of the resident. Yet, it is unclear whether tests deemed unnecessary by residents would also be considered unnecessary by attending physicians or even patients. Future efforts are needed to better define which inpatient tests are unnecessary from multiple perspectives including clinicians and patients.
Acknowledgements
The authors thank Patrick J. Brennan, MD, Jeffery S. Berns, MD, Lisa M. Bellini, MD, Jon B. Morris, MD, and Irving Nachamkin, DrPH, MPH, all from the Hospital of the University of Pennsylvania, for supporting this work. They received no compensation.
Disclosures: This work was presented in part at the AAMC Integrating Quality Meeting, June 11, 2015, Chicago, Illinois; and the Alliance for Academic Internal Medicine Fall Meeting, October 9, 2015, Atlanta, Georgia. The authors report no conflicts of interest.
References
Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: a report from one university's hospitalist service. Acad Med. 2011;86(1):139–145.
Zhi M, Ding EL, Theisen-Toupal J, Whelan J, Arnaout R. The landscape of inappropriate laboratory testing: a 15-year meta-analysis. PLoS One. 2013;8(11):e78962.
Salisbury A, Reid K, Alexander K, et al. Diagnostic blood loss from phlebotomy and hospital-acquired anemia during acute myocardial infarction. Arch Intern Med. 2011;171(18):1646–1653.
Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486–492.
Korenstein D. Charting the route to high-value care: the role of medical education. JAMA. 2015;314(22):2359–2361.
Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385–2393.
Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists' ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640–1648.
Ryskina KL, Dine CJ, Kim EJ, Bishop TF, Epstein AJ. Effect of attending practice style on generic medication prescribing by residents in the clinic setting: an observational study. J Gen Intern Med. 2015;30(9):1286–1293.
Patel MS, Reed DA, Smith C, Arora VM. Role modeling cost-conscious care: a national evaluation of perceptions of faculty at teaching hospitals in the United States. J Gen Intern Med. 2015;30(9):1294–1298.
Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago, IL: Aldine; 1967.
Detsky AS, Verma AA. A new model for medical education: celebrating restraint. JAMA. 2012;308(13):1329–1330.
Moriates C, Shah NT, Arora VM. A framework for the frontline: how hospitalists can improve healthcare value. J Hosp Med. 2016;11(4):297–302.
Goetz C, Rotman SR, Hartoularos G, Bishop TF. The effect of charge display on cost of care and physician practice behaviors: a systematic review. J Gen Intern Med. 2015;30(6):835–842.
Silvestri MT, Bongiovanni TR, Glover JG, Gross CP. Impact of price display on provider ordering: a systematic review. J Hosp Med. 2016;11(1):65–76.
Gupta R, Arora VM. Merging the health system and education silos to better educate future physicians. JAMA. 2015;314(22):2349–2350.
Address for correspondence and reprint requests: Mina S. Sedrak, MD, MS, 1500 E. Duarte Road, Duarte, CA 91010; Telephone: 626‐471‐9200; Fax: 626‐301‐8233; E‐mail: [email protected]
Sitting while interacting with patients is standard in the outpatient setting and encouraged in the inpatient setting as a best practice.[1, 2] Michael W. Kahn defined "etiquette-based medicine" as a set of easily taught behaviors that demonstrate respect for the patient; sitting at the bedside is included.[1] A prominent healthcare consulting group also recommends that physicians and nurses sit at the bedside, claiming that "the patient will estimate you were in the room 3 times longer."[3] Previous studies suggest patients may perceive physicians who sit at the bedside as more compassionate and as spending more time with them, and may perceive the overall interaction as more positive when the physician sits.[4, 5, 6] Two small studies found that patients perceived the physician as having spent more time with them if he or she sat rather than stood.[5, 6] A study in the emergency department found no effect of posture on patient perception of physician communication skills, and a study of a single attending neurosurgeon found that patients reported a better understanding of their condition when the physician sat.[5, 6] The effect of physician posture on hospitalist physician-patient communication has not been previously studied. Despite evidence that sitting in the inpatient setting may improve physician-patient communication, studies suggest that physicians rarely sit at the bedside of inpatients.[7, 8]
We conducted a cluster‐randomized trial of the impact of hospitalist physician posture during morning rounds. We hypothesized that patients whose physician sat rather than stood would perceive that their physician spent more time with them and would rate the physician's communication skills more highly. We also hypothesized that sitting would not prolong the length of the patient‐physician encounter.
PATIENTS AND METHODS
We conducted a cluster-randomized clinical trial with a crossover component, randomizing physicians to the order of sitting versus standing within a consecutive 7-day workweek. We enrolled patients being cared for by attending hospitalists on a resident-uncovered general internal medicine service in an academic tertiary care hospital. We also enrolled the hospitalists and collected their demographics and practice information. Wall-mounted folding chairs (Figure 1) were installed in all rooms on two 28-bed units for use by physicians. Eligible patients were newly admitted or transferred from the intensive care unit between June 2014 and June 2015, English speaking, and adults who consented to their own medical care. Physicians were randomly assigned to sit or stand during morning rounds for the first 3 days of their workweek; for the last 4 days, they provided care using the other posture. Blocks of 4 weeks were used to randomize the sit/stand order.
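The block randomization described above can be sketched as follows. This is a hypothetical illustration of assigning sit-first versus stand-first order within 4-week blocks; the function name and labels are invented and do not come from the investigators' actual allocation procedure.

```python
# Hypothetical sketch of block randomization: within each 4-week block,
# half the weeks are sit-first and half stand-first, in shuffled order.
import random

def randomize_block(block_size=4, seed=None):
    """Return a shuffled list of sit-first/stand-first weekly assignments."""
    rng = random.Random(seed)
    assignments = (["sit-first"] * (block_size // 2)
                   + ["stand-first"] * (block_size // 2))
    rng.shuffle(assignments)
    return assignments

# One 4-week block: two sit-first and two stand-first weeks in random order.
schedule = randomize_block(seed=42)
print(schedule)
```

Blocking guarantees balance between the two orders over every 4-week window even if the trial stops mid-stream, which simple coin-flip randomization does not.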
Figure 1
Chair used in the study.
We measured the length of the physician‐patient interaction, asked both the physician and the patient to estimate the length of the interaction, and administered a written survey to the patient with questions about the physician's communication skills. Research assistants timed the interaction from outside the room and entered the room to consent patients and administer the survey after the physician departed. Survey questions were modeled on the physician communication questions from the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. We aggregated all answers other than the most positive answer because HCAHPS questions are analyzed according to a top box methodology. Adherence to the intervention was measured by asking the physician whether he or she actually sat or stood for each interaction. We administered a survey to physicians to collect demographics and feedback.
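The "top box" aggregation described above can be sketched in a few lines. The response labels below mirror the HCAHPS-style options used in the survey; the function name and example data are illustrative.

```python
# "Top box" scoring: every answer other than the most positive one is
# collapsed into a single category, and the top-box rate is reported.
def top_box(responses, top="Always"):
    """Fraction of responses equal to the most positive answer."""
    return sum(r == top for r in responses) / len(responses)

answers = ["Always", "Usually", "Always", "Sometimes", "Always", "Never"]
print(top_box(answers))  # 0.5
```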
We estimated descriptive statistics for physician and patient participants using cross-tabulations and means. To estimate associations, we used logistic and linear regression with cluster-adjusted t statistics, clustering patients within providers. This method optimizes estimation of standard errors (and corresponding confidence intervals and P values) when the number of clusters is small (16 physicians).[9] For our primary analysis, we analyzed as randomized using an intent-to-treat approach; that is, those assigned to the standing group were analyzed in the standing group even if they actually sat (and vice versa). In a sensitivity analysis, we used the same methods to analyze the data according to actual provider posture as reported by the physician, not as randomized. We calculated the mean and range of the number of patients seen by physicians, and we compared patients' and providers' estimates of time spent and patients' satisfaction according to provider posture. We complied with the Consolidated Standards of Reporting Trials 2010 guidelines.[10] Our institutional review board approved this project. All participants provided written consent.
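To illustrate why clustering patients within providers matters, the sketch below collapses each physician's patients to a cluster-level mean before comparing arms. This is a deliberately simpler alternative to the cluster-adjusted regression the authors used, and all data values are invented.

```python
# Simplified cluster-level analysis: average a binary outcome within each
# physician (the cluster), then compare arms on the cluster means so that
# one physician with many patients cannot dominate the comparison.
from collections import defaultdict
from statistics import mean

def cluster_means(records):
    """records: (physician_id, arm, outcome) tuples -> arm -> cluster means."""
    by_cluster = defaultdict(list)
    for physician, arm, outcome in records:
        by_cluster[(physician, arm)].append(outcome)
    arms = defaultdict(list)
    for (physician, arm), outcomes in by_cluster.items():
        arms[arm].append(mean(outcomes))
    return arms

# Invented data: 1 = top-box response, 0 = anything else.
data = [
    ("dr1", "sit", 1), ("dr1", "sit", 1), ("dr1", "sit", 0),
    ("dr2", "sit", 1), ("dr2", "sit", 0),
    ("dr3", "stand", 0), ("dr3", "stand", 1),
    ("dr4", "stand", 0), ("dr4", "stand", 0),
]
arms = cluster_means(data)
print(round(mean(arms["sit"]), 2), round(mean(arms["stand"]), 2))  # 0.58 0.25
```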
RESULTS
All 17 hospitalists attending on the service consented to participate; 1 did not see any patients involved in the study and was removed from the analysis. Sixty-nine percent were female, and 81% had been in practice for 3 years or less at the time of study enrollment; 94% reported standing when assigned to stand, and 83% reported sitting when assigned to sit. We found that 31% of physicians reported routinely sitting before participating in the study, and 81% said they would sit more after the study; this change approached statistical significance (exact McNemar P = 0.06). Of the 11 physicians who reported not routinely sitting before the study, 7 cited not having a place to sit as a reason. Other rationales included being too short to see the patient if seated, believing rounds would take more time if seated, and concerns about contact precautions. Comments in the postintervention survey regarding why providers planned to sit more centered on themes of having chairs available, thinking that sitting improves communication, and thinking that patients prefer providers to sit.
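The exact McNemar test cited above compares paired before/after yes-no answers by running a two-sided binomial test on the discordant pairs (physicians whose answer changed). The sketch below uses invented counts, not the study's data.

```python
# Exact (binomial) McNemar test on discordant-pair counts.
# b = changed no -> yes, c = changed yes -> no; concordant pairs drop out.
from math import comb

def exact_mcnemar(b, c):
    """Two-sided exact McNemar p-value from discordant-pair counts."""
    n = b + c
    k = min(b, c)
    # Two-sided binomial tail probability under H0: p = 0.5.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Invented example: 1 physician switched one way, 8 the other.
print(round(exact_mcnemar(1, 8), 3))  # 0.039
```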
Two hundred eleven patients were assessed for eligibility. Fifty-two were excluded (27 did not meet inclusion criteria and 25 declined to participate), leaving 159 participating patients. Seven patient-physician pairs were inadvertently assigned the wrong intervention but were analyzed as randomized. There were no demographic differences between patient groups (Table 1). Physicians participating in the study saw an average of 13 study patients (range, 1–18) during the study. Mean time spent in the patient's room during rounds was 12:00 minutes for seated physicians and 12:10 for standing physicians (P = 0.84). Regardless of provider posture, patients overestimated the amount of time their physician spent in the room (mean difference, 4:10 minutes; P = 0.01). Patients' estimates of the time the physician spent did not vary by posture (16:00 minutes for seated, 16:19 for standing; P = 0.86).
Table 1. Patient Characteristics

Values are no. (%) for patients seen by a seated physician (n = 66) vs a standing physician (n = 93).

Patient age, y (P = 0.59)
  18–39: 16 (25.4) vs 25 (27.5)
  40–59: 17 (27.0) vs 30 (33.0)
  60+: 30 (47.6) vs 36 (39.6)
Gender (P = 0.71)
  Male: 32 (49.2) vs 43 (46.2)
  Female: 33 (50.8) vs 50 (53.8)
Ethnicity (P = 0.24)
  Caucasian: 54 (84.4) vs 67 (73.6)
  Asian or Pacific Islander: 3 (4.7) vs 5 (5.5)
  Other: 7 (10.9) vs 19 (20.9)
Patients whose physician sat on rounds were statistically significantly more likely to choose the answer "always" for the questions about their physician listening carefully to them (P = 0.02) and explaining things in a way that was easy to understand (P = 0.05; Table 2). There was no difference in patients' responses to questions about the physician interrupting them when talking or treating them with courtesy and respect. Nearly all patients chose "just right" when asked to rate the amount of time their physician had spent with them on rounds (Table 2). Our sensitivity analysis, which classified physicians according to their actual posture, yielded different results: none of its findings, including those for the questions about listening carefully and explaining things understandably, were statistically significant (see Supporting Information, Appendix 1, in the online version of this article).
Table 2. Patient Perceptions of Physician Communication

Values are no. (%) for patients seen by a seated physician (n = 66) vs a standing physician (n = 93). All variables missing <5%. *Missing 6.9%.

Patient perception of physician communication on that day's rounds: "Today on rounds, how often did this physician..."

Explain things in a way that was easy to understand? (P = 0.05)
  Never, sometimes, or usually: 7 (10.9) vs 22 (23.9)
  Always: 57 (89.1) vs 71 (76.1)
Listen carefully to you? (P = 0.02)
  Never, sometimes, or usually: 4 (6.1) vs 19 (20.4)
  Always: 62 (93.4) vs 74 (79.6)
Interrupt you when you were talking? (P = 0.46)
  Always, sometimes, or usually: 4 (6.5) vs 9 (10.0)
  Never: 58 (93.6) vs 81 (90.0)
Treat you with courtesy and respect? (P not estimable)
  Never, sometimes, or usually: 0 (0) vs 7 (7.6)
  Always: 63 (100) vs 85 (92.4)
Please rate the amount of time this physician spent with you today during morning rounds. (P = 0.41)
  Too little: 1 (1.6) vs 3 (3.5)
  Just right: 63 (98.4) vs 84 (96.5)
Did you have any important questions or concerns about your care that you did not bring up with this doctor today?* (P = 0.26)
  Yes: 4 (6.6) vs 9 (10.3)
  No: 57 (94.4) vs 78 (89.7)
DISCUSSION
In our study involving general medicine inpatients cared for by academic hospitalists, physicians did not spend more time in the room when seated, and were willing to adopt this practice. Patients perceived that seated compared to standing physicians listened more carefully and explained things in a way that was easy to understand when analyzed using an intent‐to‐treat approach. Patients did not perceive that seated physicians spent more time with them than standing physicians. To our knowledge, this is the first study showing the effects of hospitalist rounding posture on patient experience.
Our finding that patients rated seated physicians more highly on listening carefully and explaining things well indicates that training hospitalists to sit at the bedside may ultimately improve patient satisfaction. Our findings suggest seated interaction may improve satisfaction with communication without increasing time burden on physicians. However, given that these findings were not statistically significant when we analyzed our data according to actual behavior, larger studies should verify the impact of physician posture on patient experience.
Previous studies found that a minority of physicians sit in the inpatient setting, but did not study barriers to sitting while on rounds.[7, 8] A majority of physicians in our study sat when instructed to do so and when chairs were provided, and over 80% of physicians in our study said they planned to continue sitting while on rounds after the study was complete. A lack of chairs may be a major barrier to physicians adopting this facet of etiquette‐based medicine, and institutions wishing to promote this practice should consider providing chairs. Written comments from physician participants suggest physicians who are introduced to this practice enjoy sitting and think it improves physician‐patient communication. Further studies are needed to test our assumption that physicians continue sitting when chairs are provided.
Our work differs from previous studies. Johnson et al. studied interactions in the emergency room with a mean length of 8.6 minutes,[5] and Swayden et al. studied postoperative visits by a single neurosurgeon with a mean length of about 1 minute.[6] One explanation for the lack of a difference in time spent by posture might be that an average visit time of 12 minutes passes a threshold where patients make more accurate estimates of visit length or where factors other than posture more strongly influence perceptions of duration.
Limitations of our study include the relatively small sample size, single location, and limitation to English‐speaking patients able to consent themselves. Reasons for the limited sample size include that chairs were only installed in 2 units, and not all patients on the unit were under the care of participating physicians. Physician subjects were not blinded to their interactions being timed or to the fact that patients were surveyed about their communication skills. It is possible that factors that may have affected patients' responses such as severity of illness, number of consultants involved in their care, or prior experiences in the healthcare system were not equally distributed between our 2 groups. Additionally, our use of questions similar to those used in the HCAHPS instrument is not compliant with Centers for Medicare and Medicaid Services (CMS) policy. We caution others against using questions that might invalidate their hospital's participation in CMS payment programs.
Our study was limited to rounds involving 1 physician; our practice is that in a larger team the presenting member is encouraged to sit and others sit if there are additional chairs. Best practices on a teaching service are unclear and could be the subject of further study. The longer‐term sustainability of the practice of sitting on rounds is unclear. However, our physician subjects reported that they plan to continue to sit after the study, and we have shared the results with physicians in order to provide them with evidence supporting this practice. Not having a place to sit and thinking that sitting increases the amount of time spent on rounds were concerns provided in our preintervention survey, and we believe our study addresses these concerns.
Our study demonstrates the effects of a simple intervention on patient satisfaction without increasing burden on providers. Sitting at the bedside does not impact the amount of time spent with the patient, but may improve the patient's perception of the physician's communication skills and thus impact the patient experience. This simple intervention could improve patient satisfaction at little cost.
Acknowledgements
The authors acknowledge Tom Staiger, MD, UWMC Medical Director, for his assistance with obtaining chairs for this study.
References
Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358(19):1988–1989.
Sorenson E, Malakouti M, Brown G, Koo J. Enhancing patient satisfaction in dermatology. Am J Clin Dermatol. 2015;16:1–4.
The Studer Group. Q21:501–505.
Johnson RL, Sadosty AT, Weaver AL, Goyal DG. To sit or not to sit? Ann Emerg Med. 2008;51:188–193.
Swayden KJ, Anderson KK, Connelly LM, Moran JS, McMahon JK, Arnold PM. Effect of sitting vs. standing on perception of provider time at bedside: a pilot study. Patient Educ Couns. 2012;86(2):166–171.
Tackett S, Tad-y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908–913.
Block L, Hutzler L, Habicht R, et al. Do internal medicine interns practice etiquette-based communication? A critical look at the inpatient encounter. J Hosp Med. 2013;8:631–634.
Esarey J, Menger A. Practical and effective approaches to dealing with clustered data [unpublished manuscript]. Department of Political Science, Rice University, Houston, TX. Available at: http://jee3.web.rice.edu/cluster-paper.pdf. Accessed February 29, 2016.
Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010;152(11):726–732.
Sitting while interacting with patients is standard in the outpatient setting and encouraged in the inpatient setting as a best practice.[1, 2] Michael W. Kahn defined etiquette‐based medicine as a set of easily taught behaviors that demonstrate respect for the patient; sitting at the bedside is included.[1] A prominent healthcare consulting group also recommends that physicians and nurses sit at the bedside, claiming that the patient will estimate the provider was in the room 3 times longer.[3] Previous studies suggest patients may perceive physicians who sit at the bedside as more compassionate and as spending more time with them, and may perceive the overall interaction as more positive when the physician sits.[4, 5, 6] Two small studies found that patients perceived the physician as having spent more time with them if he or she sat rather than stood.[5, 6] A study in the emergency department found no effect of posture on patient perception of physician communication skills, and a study of a single attending neurosurgeon found that patients reported a better understanding of their condition when the physician sat.[5, 6] The effect of physician posture on hospitalist physician‐patient communication has not been previously studied. Despite evidence that sitting in the inpatient setting may improve physician‐patient communication, studies suggest that physicians rarely sit at the bedside of inpatients.[7, 8]
We conducted a cluster‐randomized trial of the impact of hospitalist physician posture during morning rounds. We hypothesized that patients whose physician sat rather than stood would perceive that their physician spent more time with them and would rate the physician's communication skills more highly. We also hypothesized that sitting would not prolong the length of the patient‐physician encounter.
PATIENTS AND METHODS
We conducted a cluster‐randomized clinical trial with a crossover component, randomizing physicians to the order of sitting versus standing within a consecutive 7‐day workweek. We enrolled patients being cared for by attending hospitalists on a resident‐uncovered general internal medicine service in an academic tertiary care hospital. We also enrolled the hospitalists and collected demographics and practice information. Wall‐mounted folding chairs (Figure 1) were installed in all rooms on two 28‐bed units for use by physicians. Eligible patients were newly admitted or transferred from the intensive care unit between June 2014 and June 2015, English speaking, and adults who consented to their own medical care. Physicians were randomly assigned to sit or stand during morning rounds for the first 3 days of their workweek; for the last 4 days, they provided care using the other posture. The sit/stand order was randomized in blocks of 4 weeks.
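The crossover scheme described above (each physician-week assigned to a sit-first or stand-first order, balanced within blocks of 4 weeks) can be sketched as follows. This is an illustrative reconstruction, not the authors' actual randomization procedure; the function name and seed are ours.

```python
import random

def randomize_order(n_weeks: int, seed: int = 42) -> list[str]:
    """Assign each physician-week to 'sit-first' (sit on rounds days 1-3,
    then stand days 4-7) or 'stand-first', balanced in blocks of 4 weeks."""
    rng = random.Random(seed)
    orders: list[str] = []
    for _ in range(0, n_weeks, 4):
        # Each 4-week block contains exactly 2 of each order.
        block = ["sit-first", "sit-first", "stand-first", "stand-first"]
        rng.shuffle(block)  # random order within the block keeps the design balanced
        orders.extend(block)
    return orders[:n_weeks]
```

Because balance is enforced within each block, any 4-week block contains two weeks of each order regardless of the random seed.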
Figure 1
Chair used in the study.
We measured the length of the physician‐patient interaction, asked both the physician and the patient to estimate the length of the interaction, and administered a written survey to the patient with questions about the physician's communication skills. Research assistants timed the interaction from outside the room and entered the room to consent patients and administer the survey after the physician departed. Survey questions were modeled on the physician communication questions from the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. We aggregated all answers other than the most positive answer because HCAHPS questions are analyzed according to a top box methodology. Adherence to the intervention was measured by asking the physician whether he or she actually sat or stood for each interaction. We administered a survey to physicians to collect demographics and feedback.
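Top-box scoring of HCAHPS-style items amounts to collapsing each ordinal answer into a binary indicator. A minimal sketch (the function names are ours, not part of the HCAHPS methodology):

```python
def top_box(responses: list[str], top: str = "Always") -> list[int]:
    """Collapse ordinal survey answers into a binary 'top box' indicator:
    1 if the respondent chose the most positive answer, else 0."""
    return [1 if r == top else 0 for r in responses]

def top_box_rate(responses: list[str], top: str = "Always") -> float:
    """Proportion of respondents choosing the most positive answer."""
    flags = top_box(responses, top)
    return sum(flags) / len(flags)
```

Note that the most positive answer depends on the item: for "listen carefully" it is "Always," while for a negatively worded item such as "interrupt you when you were talking" it is "Never," which the `top` parameter accommodates.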
We estimated descriptive statistics for physician and patient participants using cross‐tabs and means. To estimate associations, we used logistic and linear regression that employed cluster‐adjusted t statistics and clustered patients within providers. This method optimizes estimation of standard errors (and corresponding confidence intervals and P values) when the number of clusters is small (16 physicians).[9] For our primary analysis, we analyzed as randomized using an intent‐to‐treat approach. In other words, those assigned to the standing group were analyzed in the standing group even if they actually sat (and vice versa). In a sensitivity analysis, we used the same methods to analyze the data according to actual provider posture as reported by the physician and not as randomized. We calculated the mean and range of the number of patients seen by physicians. We compared estimates of time spent between patients and providers and patients' satisfaction according to provider posture. We complied with the Consolidated Standards of Reporting Trials 2010 guidelines.[10] Our institutional review board approved this project. All participants provided written consent.
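Clustering patients within providers can be illustrated with a cluster-robust (sandwich) variance estimator. The simplified numpy sketch below shows the core idea only; it does not implement the specific small-cluster t-statistic adjustment of Esarey and Menger that the authors cite, and the function name is ours.

```python
import numpy as np

def ols_cluster_se(X, y, groups):
    """OLS point estimates with cluster-robust (sandwich) standard errors:
    residuals may be arbitrarily correlated within a physician's patients
    but are assumed independent across physicians."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    groups = np.asarray(groups)
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta
    # "Meat": sum over clusters of outer products of per-cluster score sums.
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(groups):
        mask = groups == g
        score = X[mask].T @ resid[mask]
        meat += np.outer(score, score)
    vcov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(vcov))
```

With as few clusters as in this study (16 physicians), plain sandwich errors are known to be anti-conservative, which is why a small-cluster correction such as the one the authors reference is preferable in practice.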
RESULTS
All 17 hospitalists attending on the service consented to participate; 1 did not see any patients involved in the study and was removed from the analysis. Sixty‐nine percent were female, and 81% had been in practice for 3 years or less at the time of study enrollment; 94% reported standing when assigned to stand, and 83% reported sitting when assigned to sit. Thirty‐one percent of physicians reported routinely sitting before participating in the study, and 81% said they would sit more after the study; this result approached statistical significance (exact McNemar P = 0.06). Of the 11 physicians who reported not routinely sitting before the study, 7 cited not having a place to sit as a reason. Other reasons included being too short to see the patient if seated, believing rounds would take more time if seated, and concerns about contact precautions. Comments in the postintervention survey regarding why providers planned to sit more centered on themes of having chairs available, thinking that sitting improves communication, and thinking that patients prefer providers to sit.
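The exact McNemar test used above compares paired before/after answers from the same physicians and depends only on the discordant pairs (those who changed their answer). A stdlib-only sketch; the paper does not report the discordant counts, so the example values in the test are hypothetical.

```python
from math import comb

def exact_mcnemar_p(b: int, c: int) -> float:
    """Two-sided exact (binomial) McNemar test for paired proportions.
    b = pairs that changed in one direction, c = pairs that changed in
    the other; under H0 each discordant pair is a fair coin flip."""
    n = b + c
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)
```

For example, with hypothetical discordant counts b = 9 and c = 2, the two-sided p-value is about 0.065; the test is symmetric in b and c.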
Two hundred eleven patients were assessed for eligibility. Fifty‐two were excluded (27 did not meet inclusion criteria and 25 declined to participate), leaving 159 participating patients. Seven patient‐physician pairs were inadvertently assigned the wrong intervention but were analyzed as randomized. There were no demographic differences between patient groups (Table 1). Physicians participating in the study saw an average of 13 study patients (range, 1–18) during the study. Mean time spent in the patient's room during rounds was 12:00 minutes for seated physicians and 12:10 for standing physicians (P = 0.84). Regardless of provider posture, patients overestimated the amount of time their physician spent in the room (mean difference 4:10 minutes, P = 0.01). Patients' estimates of the time the physician spent did not vary by posture (16:00 minutes for seated, 16:19 for standing, P = 0.86).
Table 1. Patient Characteristics

                               Seated Physician,    Standing Physician,
                               N = 66               N = 93
                               n        %           n        %           P Value
Patient age, y                                                           0.59
  18–39                        16       25.4        25       27.5
  40–59                        17       27.0        30       33.0
  60+                          30       47.6        36       39.6
Gender                                                                   0.71
  Male                         32       49.2        43       46.2
  Female                       33       50.8        50       53.8
Ethnicity                                                                0.24
  Caucasian                    54       84.4        67       73.6
  Asian or Pacific Islander    3        4.7         5        5.5
  Other                        7        10.9        19       20.9
Patients whose physician sat on rounds were statistically significantly more likely to choose the answer "always" to the questions regarding their physician listening carefully to them (P = 0.02) and explaining things in a way that was easy to understand (P = 0.05, Table 2). There was no difference in patients' responses to questions about the physician interrupting them when talking or treating them with courtesy and respect. Nearly all patients chose "just right" when asked to rate the amount of time their physician had spent with them on rounds (Table 2). Our sensitivity analysis, which classified physicians according to their actual posture, yielded different results: none of its findings, including those for listening carefully and explaining things in a way that was easy to understand, were statistically significant (see Supporting Information, Appendix 1, in the online version of this article).
Table 2. Patient Perceptions of Physician Communication

                                           Seated Physician,   Standing Physician,
                                           N = 66              N = 93
                                           n       %           n       %           P Value
Patient perception of physician communication on that day's rounds
Today on rounds, how often did this physician…
Explain things in a way that was easy to understand?
  Never, sometimes, or usually             7       10.9        22      23.9        0.05
  Always                                   57      89.1        71      76.1
Listen carefully to you?
  Never, sometimes, or usually             4       6.1         19      20.4        0.02
  Always                                   62      93.4        74      79.6
Interrupt you when you were talking?
  Always, sometimes, or usually            4       6.5         9       10          0.46
  Never                                    58      93.6        81      90
Treat you with courtesy and respect?
  Never, sometimes, or usually             0       0           7       7.6         Not estimable
  Always                                   63      100         85      92.4
Please rate the amount of time this physician spent with you today during morning rounds.
  Too little                               1       1.6         3       3.5         0.41
  Just right                               63      98.4        84      96.5
Did you have any important questions or concerns about your care that you did not bring up with this doctor today?*
  Yes                                      4       6.6         9       10.3        0.26
  No                                       57      94.4        78      89.7

NOTE: All variables missing <5%. *Missing 6.9%.
DISCUSSION
In our study involving general medicine inpatients cared for by academic hospitalists, physicians did not spend more time in the room when seated, and were willing to adopt this practice. Patients perceived that seated compared to standing physicians listened more carefully and explained things in a way that was easy to understand when analyzed using an intent‐to‐treat approach. Patients did not perceive that seated physicians spent more time with them than standing physicians. To our knowledge, this is the first study showing the effects of hospitalist rounding posture on patient experience.
Our finding that patients rated seated physicians more highly on listening carefully and explaining things well indicates that training hospitalists to sit at the bedside may ultimately improve patient satisfaction. Our findings suggest seated interaction may improve satisfaction with communication without increasing time burden on physicians. However, given that these findings were not statistically significant when we analyzed our data according to actual behavior, larger studies should verify the impact of physician posture on patient experience.
Previous studies found that a minority of physicians sit in the inpatient setting, but did not study barriers to sitting while on rounds.[7, 8] A majority of physicians in our study sat when instructed to do so and when chairs were provided, and over 80% of physicians in our study said they planned to continue sitting while on rounds after the study was complete. A lack of chairs may be a major barrier to physicians adopting this facet of etiquette‐based medicine, and institutions wishing to promote this practice should consider providing chairs. Written comments from physician participants suggest physicians who are introduced to this practice enjoy sitting and think it improves physician‐patient communication. Further studies are needed to test our assumption that physicians continue sitting when chairs are provided.
Our work differs from previous studies. Johnson et al. studied interactions in the emergency room with a mean length of 8.6 minutes,[5] and Swayden et al. studied postoperative visits by a single neurosurgeon with a mean length of about 1 minute.[6] One explanation for the lack of a difference in time spent by posture might be that an average visit time of 12 minutes passes a threshold where patients make more accurate estimates of visit length or where factors other than posture more strongly influence perceptions of duration.
Limitations of our study include the relatively small sample size, single location, and limitation to English‐speaking patients able to consent themselves. Reasons for the limited sample size include that chairs were only installed in 2 units, and not all patients on the unit were under the care of participating physicians. Physician subjects were not blinded to their interactions being timed or to the fact that patients were surveyed about their communication skills. It is possible that factors that may have affected patients' responses such as severity of illness, number of consultants involved in their care, or prior experiences in the healthcare system were not equally distributed between our 2 groups. Additionally, our use of questions similar to those used in the HCAHPS instrument is not compliant with Centers for Medicare and Medicaid Services (CMS) policy. We caution others against using questions that might invalidate their hospital's participation in CMS payment programs.
Our study was limited to rounds involving 1 physician; our practice is that in a larger team the presenting member is encouraged to sit and others sit if there are additional chairs. Best practices on a teaching service are unclear and could be the subject of further study. The longer‐term sustainability of the practice of sitting on rounds is unclear. However, our physician subjects reported that they plan to continue to sit after the study, and we have shared the results with physicians in order to provide them with evidence supporting this practice. Not having a place to sit and thinking that sitting increases the amount of time spent on rounds were concerns provided in our preintervention survey, and we believe our study addresses these concerns.
Our study demonstrates the effects of a simple intervention on patient satisfaction without increasing burden on providers. Sitting at the bedside does not impact the amount of time spent with the patient, but may improve the patient's perception of the physician's communication skills and thus impact the patient experience. This simple intervention could improve patient satisfaction at little cost.
Acknowledgements
The authors acknowledge Tom Staiger, MD, UWMC Medical Director, for his assistance with obtaining chairs for this study.
Disclosure: Nothing to report.
References
Kahn M. Etiquette‐based medicine. N Engl J Med. 2008;358(19):1988–1989.
Sorenson E, Malakouti M, Brown G, Koo J. Enhancing patient satisfaction in dermatology. Am J Clin Dermatol. 2015;16:1–4.
The Studer Group. Q21:501–505.
Johnson RL, Sadosty AT, Weaver AL, Goyal DG. To sit or not to sit? Ann Emerg Med. 2008;51:188–193.
Swayden KJ, Anderson KK, Connelly LM, Moran JS, McMahon JK, Arnold PM. Effect of sitting vs. standing on perception of provider time at bedside: a pilot study. Patient Educ Couns. 2012;86(2):166–171.
Tackett S, Tad‐y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette‐based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908–913.
Block L, Hutzler L, Habicht R, et al. Do internal medicine interns practice etiquette‐based communication? A critical look at the inpatient encounter. J Hosp Med. 2013;8:631–634.
Esarey J, Menger A. Practical and effective approaches to dealing with clustered data [unpublished manuscript]. Department of Political Science, Rice University, Houston, TX. Available at: http://jee3.web.rice.edu/cluster‐paper.pdf. Accessed February 29, 2016.
Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010;152(11):726–732.
Address for correspondence and reprint requests: Susan E. Merel, MD, 1959 NE Pacific Street, Box 356429, Seattle, WA 98195‐6429; Telephone: 206‐616‐4088; Fax: 206‐221‐8732; E‐mail: [email protected]
Hand hygiene (HH) is believed to be one of the most important interventions to prevent healthcare‐associated infection, yet physicians are notorious for their poor compliance.[1, 2, 3] At our 800‐bed acute care academic hospital, we implemented a multifaceted HH program[4] in 2007, which was associated with improvement in HH compliance from 43% to 87%. Despite this improvement, HH compliance among physicians remained suboptimal, with rates below 60% in some patient areas. A targeted campaign focused on recruiting physician champions resulted in some improvement, but physician compliance has consistently remained below that of nurses (70%–75% for physicians vs 85%–90% for nurses).
Our experience parallels the results seen in multinational surveys demonstrating consistently lower physician HH compliance.[5] Given the multiple improvement efforts directed at physicians and the apparent ceiling observed in HH performance, we wanted to confirm whether physicians are truly recalcitrant to cleaning their hands, or whether lower compliance among physicians reflected a differential in the Hawthorne effect inherent to direct observation methods. Specifically, we wondered if nurses tend to recognize auditors more readily than physicians and therefore show higher apparent HH compliance when auditors are present. We also wanted to verify whether the behavior of attending physicians influenced compliance of their physician trainees. To test these hypotheses, we trained 2 clinical observers to covertly measure HH compliance of nurses and physicians on 3 different clinical services.
METHODS
Between May 27, 2015 and July 31, 2015, 2 student observers joined clinical rotations on physician and nursing teams, respectively. Healthcare teams were unaware that the student observers were measuring HH compliance during their clinical rotation. Students rotated through the emergency department and the general medical and surgical wards for no more than 1 week at a time to increase exposure to different providers and minimize the risk of exposing the covert observation.
Prior to the study period, the students underwent training and validation with a hospital HH auditor at another clinical setting offsite to avoid recognition of these students as HH observers by healthcare providers at the main hospital. Training with the auditors continued until interobserver agreement across all HH opportunities reached 100% for 2 consecutive observation days.
During their rotations, students covertly recorded HH compliance based on the moments of hand hygiene[4] and also noted the location, the presence and compliance of the attending physician, team size during the patient encounter, and isolation requirements. Both students measured the HH compliance of the nurses and physicians around them. Although students spent the majority of their time with their assigned physician or nursing teams, they did not limit their observations to these individuals; they recorded the compliance of any nurse or physician on the ward who was within sight during an HH opportunity. To limit clustering of observations of the same healthcare worker, a maximum of 2 observations per healthcare worker per day was recorded.
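The per-worker cap described above amounts to a simple counter keyed by worker and day. A minimal sketch in Python; the record layout (`worker_id`, `date`, `compliant`) is illustrative, not taken from the study:

```python
from collections import defaultdict

MAX_PER_WORKER_PER_DAY = 2  # cap used in the study to limit clustering


def filter_observations(observations):
    """Keep at most 2 observations per healthcare worker per day.

    `observations` is an iterable of dicts with hypothetical keys
    'worker_id', 'date', and 'compliant'.
    """
    counts = defaultdict(int)
    kept = []
    for obs in observations:
        key = (obs["worker_id"], obs["date"])
        if counts[key] < MAX_PER_WORKER_PER_DAY:
            counts[key] += 1
            kept.append(obs)
    return kept


# Illustrative use: three sightings of one nurse on one day -> only 2 kept.
obs = [
    {"worker_id": "RN-1", "date": "2015-06-01", "compliant": True},
    {"worker_id": "RN-1", "date": "2015-06-01", "compliant": False},
    {"worker_id": "RN-1", "date": "2015-06-01", "compliant": True},
    {"worker_id": "MD-1", "date": "2015-06-01", "compliant": True},
]
print(len(filter_observations(obs)))  # 3
```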
We compared covertly measured HH compliance with data from overt observation by hospital auditors during the same time period. Differences in the proportion of HH compliance between covert observations and hospital audits were compared with a chi-square (χ2) test. The difference between the overt-covert compliance gaps for nurses and physicians was compared using the Breslow-Day test.
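The chi-square comparison of compliant versus noncompliant opportunities can be reproduced from the counts reported in the Results. This is a sketch, not the authors' analysis code; for a 2x2 table the statistic has 1 degree of freedom, so the tail probability follows from the normal distribution via `erfc`:

```python
import math


def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))


def p_value_1df(x):
    """Upper-tail probability for a chi-square variate with 1 df,
    using P(X > x) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(x / 2))


# Covert: 799 compliant of 1597 opportunities; auditors: 2769 of 3309.
x = chi2_2x2(799, 1597 - 799, 2769, 3309 - 2769)
p = p_value_1df(x)
print(round(x, 1), p < 0.0002)  # very large statistic; P < 0.0002, as reported
```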
The study was approved by the hospital's research ethics board. Although deception was used in this study,[2, 6] all data were collected for quality improvement purposes, and the aggregate results were disclosed to hospital staff following the study.
RESULTS
Covertly observed HH compliance was 50.0% (799/1597) compared with 83.7% (2769/3309) recorded by hospital auditors during the same time period (P < 0.0002) (Table 1). There was no significant difference in the compliance measured by each student (50.1%, 473/928 vs 48.7%, 326/669) (P = 0.3), and their results were combined for the rest of the analysis. Compliance was 43.1% (344/798) before contact with the patient or patient environment, 74.3% (26/35) before clean/aseptic procedures, 34.8% (8/23) after potential body fluid exposure, and 56.8% (483/851) after contact with the patient or patient environment. Healthcare providers examining patients with isolation precautions had an HH compliance of 74.8% (101/135) compared to 47.0% (385/820) when isolation precautions were not required (P < 0.0002).
Hand Hygiene Compliance Across Clinical Services and Professional Groupings as Measured by Covert Observers and Hospital Auditors During the Study Period

| | Covert Observers, Compliance (95% CI) | Hospital Auditors, Compliance (95% CI) | Difference |
| --- | --- | --- | --- |
| Overall hand hygiene compliance | 50.0% (47.6–52.5) | 83.7% (82.4–84.9) | 33.7% |
| Service | | | |
| Medicine | 58.9% (55.3–62.5) | 85.0% (82.7–87.3) | 26.1% |
| Surgery | 45.7% (41.6–49.8) | 91.0% (87.5–93.7) | 45.3% |
| Emergency | 43.9% (38.9–49.9) | 73.8% (68.9–78.2) | 29.9% |
| Nurses | 45.1% (41.5–48.7) | 85.8% (83.3–87.9) | 40.7% |
| Physicians | | | |
| Overall compliance | 54.2% (50.9–57.1) | 73.2% (67.3–78.4) | 19.0% |
| Trainee compliance* | 79.5% (73.6–84.3) | | |
| Trainee compliance† | 18.9% (13.3–26.1) | | |

NOTE: Abbreviations: CI, confidence interval. *When attending physicians cleaned their hands. †When attending physicians did not clean their hands.
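The confidence intervals in the table are consistent with a simple normal-approximation (Wald) binomial interval; the sketch below reproduces the overall covert estimate (the authors' exact interval method is not stated, so this is illustrative):

```python
import math


def wald_ci(successes, n, z=1.96):
    """95% Wald confidence interval for a binomial proportion, in percent."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return round(100 * (p - half), 1), round(100 * (p + half), 1)


# Overall covert compliance: 799/1597.
print(wald_ci(799, 1597))  # (47.6, 52.5), matching the table
```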
Hospital auditor data showed that surgery and medicine had similarly high rates of compliance (91.0% and 85.0%, respectively), whereas the emergency department had a notably lower rate of 73.8%. Covert observation confirmed a lower rate in the emergency department (43.9%), but showed a higher compliance on general medicine than on surgery (58.9% vs 45.7%; P = 0.02). The difference in physician compliance between hospital auditors and covert observers was 19.0% (73.2%, 175/239 vs. 54.2%, 469/865); for nurses this difference was much higher at 40.7% (85.8%, 754/879 vs. 45.1%, 330/732) (P < 0.0001) (Table 1).
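The "difference between differences" comparison corresponds to a Breslow-Day test of whether the overt/covert odds ratio is the same for nurses and physicians. A minimal sketch built from the reported counts, assuming a Mantel-Haenszel common odds ratio (illustrative, not the authors' code):

```python
import math


def mh_common_or(tables):
    """Mantel-Haenszel common odds ratio across 2x2 strata (a, b, c, d)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den


def breslow_day(tables):
    """Breslow-Day homogeneity statistic and its 1-df tail probability
    (valid here since K = 2 strata gives K - 1 = 1 df)."""
    psi = mh_common_or(tables)
    stat = 0.0
    for a, b, c, d in tables:
        n, r1, c1 = a + b + c + d, a + b, a + c
        # Expected (1,1) count A under the common OR solves
        # psi = A * (n - r1 - c1 + A) / ((r1 - A) * (c1 - A)).
        qa = 1 - psi
        qb = n - r1 - c1 + psi * (r1 + c1)
        qc = -psi * r1 * c1
        disc = math.sqrt(qb * qb - 4 * qa * qc)
        roots = [(-qb + disc) / (2 * qa), (-qb - disc) / (2 * qa)]
        A = next(r for r in roots if max(0, r1 + c1 - n) < r < min(r1, c1))
        var = 1 / (1 / A + 1 / (r1 - A) + 1 / (c1 - A) + 1 / (n - r1 - c1 + A))
        stat += (a - A) ** 2 / var
    return stat, math.erfc(math.sqrt(stat / 2))


# Rows: overt vs covert observation; columns: compliant vs noncompliant.
nurses = (754, 879 - 754, 330, 732 - 330)
physicians = (175, 239 - 175, 469, 865 - 469)
stat, p = breslow_day([nurses, physicians])
print(p < 0.0001)  # True: the overt-covert gap differs between professions
```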
In terms of physician compliance, primary teams tended to have lower HH compliance, at 50.4% (323/641), compared with consulting services at 57.0% (158/277) (P = 0.06). Team rounds of ≥3 members were associated with higher compliance than encounters involving <3 members (62.1%, 282/454 vs. 42.0%, 128/308) (P < 0.0002). The presence of the attending physician did not in itself affect trainee HH compliance (55.5%, 201/362 when the attending was present vs. 56.8%, 133/234 when absent; P = 0.79). However, trainee HH compliance improved markedly when attending staff cleaned their hands and decreased markedly when they did not (79.5%, 174/219 vs. 18.9%, 27/143; P < 0.0002).
DISCUSSION
We introduced covert HH observers at our hospital to determine whether differences in the Hawthorne effect accounted for the measured disparity between physician and nurse HH compliance, and to gain further insights into the barriers and enablers of physician HH compliance. We discovered that performance differences between physicians and nurses decreased when neither group was aware that HH was being measured, suggesting that healthcare professions are differentially affected by the Hawthorne effect. This difference may be explained by the continuity of nurses on the ward, which makes them more aware of visitors such as HH auditors,[7] compared with physicians, who rotate periodically on the ward.
Although hospital auditors play a central role in HH education through in‐the‐moment feedback, use of these data to benchmark performance can lead to inappropriate inferences about HH compliance. Prior studies using automated HH surveillance have suggested that the magnitude of the Hawthorne effect varies based on baseline HH rates,[8] whereas our study suggests a differential Hawthorne effect between professions and clinical services. If we relied only on auditor data, we would have continued to believe that only physicians in our organization had poor HH compliance, and we would not be aware of the global nature of the HH problem.
Our results are similar to those of Pan et al., who used covert medical students to measure HH and found compliance of 44.1% compared with 94.1% by unit auditors.[2] Because their study involved an active feedback intervention, the differential in the Hawthorne effect between professions could not be reliably assessed. However, they observed a progressive increase in nurse HH compliance using covert observation methods, suggesting improvement in HH performance independent of observer bias.[7]
Covert observation in our study also provided important insights regarding barriers and enablers of HH compliance. Self‐preservation behaviors were common among both nurses and physicians: HH compliance was consistently higher after patient contact than before it, and higher when seeing patients who required additional precautions. This finding confirms that the perceived risk of transmission is a powerful motivating factor for HH.[9] Larger groups of trainees were more likely to clean their hands, likely due to peer effects.[10] The strong impact of role modeling on HH was also noted, as previously suggested in the literature,[3, 6] but our study captures the magnitude of this effect: whether the attending physician cleaned his or her hands during rounds influenced the HH compliance of the rest of the physician team in the same direction (80% when the attending was compliant vs 20% when noncompliant).
Our study has several important limitations. The differential Hawthorne effect seen at our center may not reflect other institutions, which may have numerous HH auditors or high staff turnover, resulting in a lower ability to recognize auditors. We cannot exclude the possibility that a Hawthorne effect operated even under covert observation and affected nurse and physician performance differently, but frequent rotation of the students helped maintain the covertness of observations. Finally, because the student observers were covert, a longer observation time frame could not be sustained.
Our experience using covert HH auditors suggests that traditional HH audits not only overstate HH performance overall, but can lead to inaccurate inferences regarding HH performance due to relative differences in Hawthorne effect. The answer to the question regarding whether physicians clean their hands appears to be that they do just as often as nurses, but that all healthcare workers have tremendous room for improvement. We suggest that future improvement efforts will rely on more accurate HH monitoring systems and strong attending physician leadership to set an example for trainees.
Disclosures
This study was jointly funded by the Centre for Quality Improvement and Patient Safety of the University of Toronto in collaboration with Sunnybrook Health Sciences Centre. All authors report no conflicts of interest relevant to this article.
References

1. World Health Organization. WHO guidelines on hand hygiene in health care. Available at: http://whqlibdoc.who.int/publications/2009/9789241597906_eng.pdf. Accessed April 4, 2015.
2. Pan SC, Tien KL, Hung IC, et al. Compliance of health care workers with hand hygiene practices: independent advantages of overt and covert observers. PLoS One. 2013;8:e53746.
3. Squires JE, Linklater S, Grimshaw JM, et al. Understanding practice: factors that influence physician hand hygiene compliance. Infect Control Hosp Epidemiol. 2014;35:1511–1520.
4. Just Clean Your Hands (JCYH). Ontario Agency for Health Promotion and Protection. Available at: http://www.publichealthontario.ca/en/BrowseByTopic/InfectiousDiseases/JustCleanYourHands/Pages/Just‐Clean‐Your‐Hands.aspx. Accessed August 4, 2015.
5. Allegranzi B, Gayet‐Ageron A, Damani N, et al. Global implementation of WHO's multimodal strategy for improvement of hand hygiene: a quasi‐experimental study. Lancet Infect Dis. 2013;13:843–851.
6. Schneider J, Moromisato D, Zemetra B, et al. Hand hygiene adherence is influenced by the behavior of role models. Pediatr Crit Care Med. 2009;10:360–363.
7. Srigley JA, Furness CD, Baker GR, Gardam M. Quantification of the Hawthorne effect in hand hygiene compliance monitoring using an electronic monitoring system: a retrospective cohort study. BMJ Qual Saf. 2014;23:974–980.
8. Kohli E, Ptak J, Smith R, et al. Variability in the Hawthorne effect with regard to hand hygiene performance in high‐ and low‐performing inpatient care units. Infect Control Hosp Epidemiol. 2009;30:222–225.
9. Borg MA, Benbachir M, Cookson BD, et al. Self‐protection as a driver for hand hygiene among healthcare workers. Infect Control Hosp Epidemiol. 2009;30:578–580.
10. Monsalve MN, Pemmaraju SV, Thomas GW, et al. Do peer effects improve hand hygiene adherence among healthcare workers? Infect Control Hosp Epidemiol. 2014;35:1277–1285.
Hand hygiene (HH) is believed to be one of the single most important interventions to prevent healthcare‐associated infection, yet physicians are notorious for their poor compliance.[1, 2, 3] At our 800‐bed acute care academic hospital, we implemented a multifaceted HH program[4] in 2007, which was associated with improved HH compliance rates from 43% to 87%. Despite this improvement, HH compliance among physicians remained suboptimal, with rates below 60% in some patient areas. A targeted campaign focused on recruitment of physician champions, resulted in some improvement, but physician compliance has consistently remained below performance of nurses (70%75% for physicians vs 85%90% for nurses).
Our experience parallels the results seen in multinational surveys demonstrating consistently lower physician HH compliance.[5] Given the multiple improvement efforts directed at physicians and the apparent ceiling observed in HH performance, we wanted to confirm whether physicians are truly recalcitrant to cleaning their hands, or whether lower compliance among physicians reflected a differential in the Hawthorne effect inherent to direct observation methods. Specifically, we wondered if nurses tend to recognize auditors more readily than physicians and therefore show higher apparent HH compliance when auditors are present. We also wanted to verify whether the behavior of attending physicians influenced compliance of their physician trainees. To test these hypotheses, we trained 2 clinical observers to covertly measure HH compliance of nurses and physicians on 3 different clinical services.
METHODS
Between May 27, 2015 and July 31, 2015, 2 student observers joined clinical rotations on physician and nursing teams, respectively. Healthcare teams were unaware that the student observers were measuring HH compliance during their clinical rotation. Students rotated in the emergency department, general medical and surgical wards for no more than 1 week at a time to increase exposure to different providers and minimize risk of exposing the covert observation.
Prior to the study period, the students underwent training and validation with a hospital HH auditor at another clinical setting offsite to avoid any recognition of these students by healthcare providers as observers of HH at the main hospital. Training with the auditors occurred until interobserver agreement between all HH opportunities reached 100% agreement for 2 consecutive observation days.
During their rotations, students covertly recorded HH compliance based on moments of hand hygiene[4] and also noted location, presence, and compliance of the attending physician, team size during patient encounter, and isolation requirements. Both students measured HH compliance of nurses and physicians around them. Although students spent the majority of their time with their assigned physician or nurse teams, they did not limit their observations to these individuals only, but recorded compliance of any nurse or physician on the ward as long as they were within sight during an HH opportunity. To limit clustering of observations of the same healthcare worker, up to a maximum of 2 observations per healthcare worker per day was recorded.
We compared covertly measured HH compliance with data from overt observation by hospital auditors during the same time period. Differences in proportion of HH compliance were compared with hospital audits during the same period with a 2 test. Difference between differences in overtly and covertly measured HH compliance for nurses and physicians was compared using Breslow day test.
The study was approved by the hospital's research ethics board. Although deception was used in this study,[2, 6] all data were collected for quality improvement purposes, and the aggregate results were disclosed to hospital staff following the study.
RESULTS
Covertly observed HH compliance was 50.0% (799/1597) compared with 83.7% (2769/3309) recorded by hospital auditors during the same time period (P < 0.0002) (Table 1). There was no significant difference in the compliance measured by each student (50.1%, 473/928 vs 48.7%, 326/669) (P = 0.3), and their results were combined for the rest of the analysis. Compliance before contact with the patient or patient environment was 43.1% (344/798), 74.3% (26/35) before clean/aseptic procedures, 34.8% (8/23) after potential body fluid exposure, and 56.8% (483/851) after contact with the patient or patient environment. Healthcare providers examining patients with isolation precautions were found to have a HH compliance of 74.8% (101/135) compared to 47.0% (385/820) when isolation precautions were not required (P < 0.0002).
Hand Hygiene Compliance Across Clinical Services and Professional Groupings as Measured by Covert Observers and Hospital Auditors During the Study Period
Covert Observers, Compliance (95% CI)
Hospital Auditors, Compliance (95% CI)
Difference
NOTE: Abbreviations: CI, confidence interval. *When attending physicians cleaned their hands. When attending physicians did not clean their hands.
Overall hand hygiene compliance
50.0% (47.6‐52.5)
83.7% (82.4‐84.9)
33.7%
Service
Medicine
58.9% (55.3‐62.5)
85.0% (82.7‐87.3)
26.1%
Surgery
45.7% (41.6‐49.8)
91.0% (87.5‐93.7)
45.3%
Emergency
43.9% (38.9‐49.9)
73.8% (68.9‐78.2)
29.9%
Nurses
45.1% (41.5‐48.7)
85.8% (83.3‐87.9)
40.7%
Physicians
Overall compliance
54.2% (50.9‐57.1)
73.2% (67.3‐78.4)
19.0%
Trainee compliance*
79.5% (73.6‐84.3)
Trainee compliance
18.9% (13.3‐26.1)
Hospital auditor data showed that surgery and medicine had similarly high rates of compliance (91.0% and 85.0%, respectively), whereas the emergency department had a notably lower rate of 73.8%. Covert observation confirmed a lower rate in the emergency department (43.9%), but showed a higher compliance on general medicine than on surgery (58.9% vs 45.7%; P = 0.02). The difference in physician compliance between hospital auditors and covert observers was 19.0% (73.2%, 175/239 vs. 54.2%, 469/865); for nurses this difference was much higher at 40.7% (85.8%, 754/879 vs. 45.1%, 330/732) (P < 0.0001) (Table 1).
In terms of physician compliance, primary teams tended to have lower HH compliance of 50.4% (323/641) compared with consulting services at 57.0% (158/277) (P = 0.06). Team rounds of 3 members were associated with higher compliance compared with encounters involving <3 members (62.1%, 282/454 vs. 42.0%, 128/308) (P < 0.0002). Presence of attending physician did not affect trainee HH compliance (55.5%, 201/362 when attending present vs. 56.8%, 133/234 when attending absent; P = 0.79). However, trainee HH compliance improved markedly when attending staff cleaned their hands and decreased markedly when they did not (79.5%, 174/219 vs. 18.9%, 27/143; P < 0.0002).
DISCUSSION
We introduced covert HH observers at our hospital to determine whether differences in Hawthorne effect accounted for measured disparity between physician HH compliance, and to gain further insights into the barriers and enablers of physician HH compliance. We discovered that performance differences between physicians and nurses decreased when neither group was aware that HH was being measured, suggesting that healthcare professions are differentially affected by the Hawthorne effect. This difference may be explained by the continuity of nurses on the ward that makes them more aware of visitors like HH auditors,[7] compared with physicians who rotate periodically on the ward.
Although hospital auditors play a central role in HH education through in‐the‐moment feedback, use of these data to benchmark performance can lead to inappropriate inferences about HH compliance. Prior studies using automated HH surveillance have suggested that the magnitude of the Hawthorne effect varies based on baseline HH rates,[8] whereas our study suggests a differential Hawthorne effect between professions and clinical services. If we relied only on auditor data, we would have continued to believe that only physicians in our organization had poor HH compliance, and we would not be aware of the global nature of the HH problem.
Our results are similar to that of Pan et al., who used covert medical students to measure HH and found compliance of 44.1% compared with 94.1% by unit auditors.[2] Because their study involved an active feedback intervention, the differential in Hawthorne effect between professions could not be reliably assessed. However, they observed a progressive increase in nurse HH compliance using covert observation methods, suggesting improvement in HH performance independent of observer bias.[7]
Covert observation in our study also provided important insights regarding barriers and enablers of HH compliance. Self‐preservation behaviors were common among both nurses and physicians, as HH compliance was consistently higher after patient contact compared to before or when seeing patients who required additional precautions. This finding confirms that the perceived risk of transmission seems to be a powerful motivating factor for HH.[9] Larger groups of trainees were more likely to clean their hands, likely due to peer effects.[10] The strong impact of role modeling on HH was also noted as previously suggested in the literature,[3, 6] but our study captures the magnitude of this effect. Whether or not the attending physician cleaned their hands during rounds either positively or negatively influenced HH compliance of the rest of the physician team (80% when compliant vs 20% when noncompliant).
Our study has several important limitations. The differential Hawthorne effect seen at our center may not reflect other institutions that have numerous HH auditors or high staff turnover resulting in lower ability to recognize auditors. We cannot exclude the possibility of Hawthorne effect using covert methods that could have affected nurse and physician performance differently, but frequent rotation of the students helped maintain covertness of observations. Finally, due to the nature of the covert student observers, a longer observation time frame could not be sustained.
Our experience using covert HH auditors suggests that traditional HH audits not only overstate HH performance overall, but can lead to inaccurate inferences regarding HH performance due to relative differences in Hawthorne effect. The answer to the question regarding whether physicians clean their hands appears to be that they do just as often as nurses, but that all healthcare workers have tremendous room for improvement. We suggest that future improvement efforts will rely on more accurate HH monitoring systems and strong attending physician leadership to set an example for trainees.
Disclosures
This study was jointly funded by the Centre for Quality Improvement and Patient Safety of the University of Toronto in collaboration with Sunnybrook Health Sciences Centre. All authors report no conflicts of interest relevant to this article.
Hand hygiene (HH) is believed to be one of the single most important interventions to prevent healthcare‐associated infection, yet physicians are notorious for their poor compliance.[1, 2, 3] At our 800‐bed acute care academic hospital, we implemented a multifaceted HH program[4] in 2007, which was associated with improved HH compliance rates from 43% to 87%. Despite this improvement, HH compliance among physicians remained suboptimal, with rates below 60% in some patient areas. A targeted campaign focused on recruitment of physician champions, resulted in some improvement, but physician compliance has consistently remained below performance of nurses (70%75% for physicians vs 85%90% for nurses).
Our experience parallels the results seen in multinational surveys demonstrating consistently lower physician HH compliance.[5] Given the multiple improvement efforts directed at physicians and the apparent ceiling observed in HH performance, we wanted to confirm whether physicians are truly recalcitrant to cleaning their hands, or whether lower compliance among physicians reflected a differential in the Hawthorne effect inherent to direct observation methods. Specifically, we wondered if nurses tend to recognize auditors more readily than physicians and therefore show higher apparent HH compliance when auditors are present. We also wanted to verify whether the behavior of attending physicians influenced compliance of their physician trainees. To test these hypotheses, we trained 2 clinical observers to covertly measure HH compliance of nurses and physicians on 3 different clinical services.
METHODS
Between May 27, 2015 and July 31, 2015, 2 student observers joined clinical rotations on physician and nursing teams, respectively. Healthcare teams were unaware that the student observers were measuring HH compliance during their clinical rotation. Students rotated in the emergency department, general medical and surgical wards for no more than 1 week at a time to increase exposure to different providers and minimize risk of exposing the covert observation.
Prior to the study period, the students underwent training and validation with a hospital HH auditor at another clinical setting offsite to avoid any recognition of these students by healthcare providers as observers of HH at the main hospital. Training with the auditors occurred until interobserver agreement between all HH opportunities reached 100% agreement for 2 consecutive observation days.
During their rotations, students covertly recorded HH compliance based on moments of hand hygiene[4] and also noted location, presence, and compliance of the attending physician, team size during patient encounter, and isolation requirements. Both students measured HH compliance of nurses and physicians around them. Although students spent the majority of their time with their assigned physician or nurse teams, they did not limit their observations to these individuals only, but recorded compliance of any nurse or physician on the ward as long as they were within sight during an HH opportunity. To limit clustering of observations of the same healthcare worker, up to a maximum of 2 observations per healthcare worker per day was recorded.
We compared covertly measured HH compliance with data from overt observation by hospital auditors during the same time period. Differences in proportion of HH compliance were compared with hospital audits during the same period with a 2 test. Difference between differences in overtly and covertly measured HH compliance for nurses and physicians was compared using Breslow day test.
The study was approved by the hospital's research ethics board. Although deception was used in this study,[2, 6] all data were collected for quality improvement purposes, and the aggregate results were disclosed to hospital staff following the study.
RESULTS
Covertly observed HH compliance was 50.0% (799/1597) compared with 83.7% (2769/3309) recorded by hospital auditors during the same time period (P < 0.0002) (Table 1). There was no significant difference in the compliance measured by each student (50.1%, 473/928 vs 48.7%, 326/669) (P = 0.3), and their results were combined for the rest of the analysis. Compliance before contact with the patient or patient environment was 43.1% (344/798), 74.3% (26/35) before clean/aseptic procedures, 34.8% (8/23) after potential body fluid exposure, and 56.8% (483/851) after contact with the patient or patient environment. Healthcare providers examining patients with isolation precautions were found to have a HH compliance of 74.8% (101/135) compared to 47.0% (385/820) when isolation precautions were not required (P < 0.0002).
Hand Hygiene Compliance Across Clinical Services and Professional Groupings as Measured by Covert Observers and Hospital Auditors During the Study Period
Covert Observers, Compliance (95% CI)
Hospital Auditors, Compliance (95% CI)
Difference
NOTE: Abbreviations: CI, confidence interval. *When attending physicians cleaned their hands. When attending physicians did not clean their hands.
Overall hand hygiene compliance
50.0% (47.6‐52.5)
83.7% (82.4‐84.9)
33.7%
Service
Medicine
58.9% (55.3‐62.5)
85.0% (82.7‐87.3)
26.1%
Surgery
45.7% (41.6‐49.8)
91.0% (87.5‐93.7)
45.3%
Emergency
43.9% (38.9‐49.9)
73.8% (68.9‐78.2)
29.9%
Nurses
45.1% (41.5‐48.7)
85.8% (83.3‐87.9)
40.7%
Physicians
Overall compliance
54.2% (50.9‐57.1)
73.2% (67.3‐78.4)
19.0%
Trainee compliance*
79.5% (73.6‐84.3)
Trainee compliance
18.9% (13.3‐26.1)
Hospital auditor data showed that surgery and medicine had similarly high rates of compliance (91.0% and 85.0%, respectively), whereas the emergency department had a notably lower rate of 73.8%. Covert observation confirmed a lower rate in the emergency department (43.9%), but showed a higher compliance on general medicine than on surgery (58.9% vs 45.7%; P = 0.02). The difference in physician compliance between hospital auditors and covert observers was 19.0% (73.2%, 175/239 vs. 54.2%, 469/865); for nurses this difference was much higher at 40.7% (85.8%, 754/879 vs. 45.1%, 330/732) (P < 0.0001) (Table 1).
In terms of physician compliance, primary teams tended to have lower HH compliance of 50.4% (323/641) compared with consulting services at 57.0% (158/277) (P = 0.06). Team rounds of 3 members were associated with higher compliance compared with encounters involving <3 members (62.1%, 282/454 vs. 42.0%, 128/308) (P < 0.0002). Presence of attending physician did not affect trainee HH compliance (55.5%, 201/362 when attending present vs. 56.8%, 133/234 when attending absent; P = 0.79). However, trainee HH compliance improved markedly when attending staff cleaned their hands and decreased markedly when they did not (79.5%, 174/219 vs. 18.9%, 27/143; P < 0.0002).
DISCUSSION
We introduced covert HH observers at our hospital to determine whether differences in Hawthorne effect accounted for measured disparity between physician HH compliance, and to gain further insights into the barriers and enablers of physician HH compliance. We discovered that performance differences between physicians and nurses decreased when neither group was aware that HH was being measured, suggesting that healthcare professions are differentially affected by the Hawthorne effect. This difference may be explained by the continuity of nurses on the ward that makes them more aware of visitors like HH auditors,[7] compared with physicians who rotate periodically on the ward.
Although hospital auditors play a central role in HH education through in‐the‐moment feedback, use of these data to benchmark performance can lead to inappropriate inferences about HH compliance. Prior studies using automated HH surveillance have suggested that the magnitude of the Hawthorne effect varies based on baseline HH rates,[8] whereas our study suggests a differential Hawthorne effect between professions and clinical services. If we relied only on auditor data, we would have continued to believe that only physicians in our organization had poor HH compliance, and we would not be aware of the global nature of the HH problem.
Our results are similar to that of Pan et al., who used covert medical students to measure HH and found compliance of 44.1% compared with 94.1% by unit auditors.[2] Because their study involved an active feedback intervention, the differential in Hawthorne effect between professions could not be reliably assessed. However, they observed a progressive increase in nurse HH compliance using covert observation methods, suggesting improvement in HH performance independent of observer bias.[7]
Covert observation in our study also provided important insights regarding barriers and enablers of HH compliance. Self‐preservation behaviors were common among both nurses and physicians, as HH compliance was consistently higher after patient contact compared to before or when seeing patients who required additional precautions. This finding confirms that the perceived risk of transmission seems to be a powerful motivating factor for HH.[9] Larger groups of trainees were more likely to clean their hands, likely due to peer effects.[10] The strong impact of role modeling on HH was also noted as previously suggested in the literature,[3, 6] but our study captures the magnitude of this effect. Whether or not the attending physician cleaned their hands during rounds either positively or negatively influenced HH compliance of the rest of the physician team (80% when compliant vs 20% when noncompliant).
Our study has several important limitations. The differential Hawthorne effect seen at our center may not reflect other institutions that have numerous HH auditors or high staff turnover, resulting in lower ability to recognize auditors. We cannot exclude the possibility of a Hawthorne effect under covert methods that could have affected nurse and physician performance differently, but frequent rotation of the students helped maintain the covertness of observations. Finally, because we relied on covert student observers, a longer observation time frame could not be sustained.
Our experience using covert HH auditors suggests that traditional HH audits not only overstate HH performance overall, but can lead to inaccurate inferences regarding HH performance due to relative differences in Hawthorne effect. The answer to the question regarding whether physicians clean their hands appears to be that they do just as often as nurses, but that all healthcare workers have tremendous room for improvement. We suggest that future improvement efforts will rely on more accurate HH monitoring systems and strong attending physician leadership to set an example for trainees.
Disclosures
This study was jointly funded by the Centre for Quality Improvement and Patient Safety of the University of Toronto in collaboration with Sunnybrook Health Sciences Centre. All authors report no conflicts of interest relevant to this article.
References
World Health Organization. WHO guidelines on hand hygiene in health care. Available at: http://whqlibdoc.who.int/publications/2009/9789241597906_eng.pdf. Accessed April 4, 2015.
Pan SC, Tien KL, Hung IC, et al. Compliance of health care workers with hand hygiene practices: independent advantages of overt and covert observers. PLoS One. 2013;8:e53746.
Squires JE, Linklater S, Grimshaw JM, et al. Understanding practice: factors that influence physician hand hygiene compliance. Infect Control Hosp Epidemiol. 2014;35:1511–1520.
Just Clean Your Hands (JCYH). Ontario Agency for Health Promotion and Protection. Available at: http://www.publichealthontario.ca/en/BrowseByTopic/InfectiousDiseases/JustCleanYourHands/Pages/Just‐Clean‐Your‐Hands.aspx. Accessed August 4, 2015.
Allegranzi B, Gayet‐Ageron A, Damani N, et al. Global implementation of WHO's multimodal strategy for improvement of hand hygiene: a quasi‐experimental study. Lancet Infect Dis. 2013;13:843–851.
Schneider J, Moromisato D, Zemetra B, et al. Hand hygiene adherence is influenced by the behavior of role models. Pediatr Crit Care Med. 2009;10:360–363.
Srigley JA, Furness CD, Baker GR, Gardam M. Quantification of the Hawthorne effect in hand hygiene compliance monitoring using an electronic monitoring system: a retrospective cohort study. BMJ Qual Saf. 2014;23:974–980.
Kohli E, Ptak J, Smith R, et al. Variability in the Hawthorne effect with regard to hand hygiene performance in high‐ and low‐performing inpatient care units. Infect Control Hosp Epidemiol. 2009;30:222–225.
Borg MA, Benbachir M, Cookson BD, et al. Self‐protection as a driver for hand hygiene among healthcare workers. Infect Control Hosp Epidemiol. 2009;30:578–580.
Monsalve MN, Pemmaraju SV, Thomas GW, et al. Do peer effects improve hand hygiene adherence among healthcare workers? Infect Control Hosp Epidemiol. 2014;35:1277–1285.
Internal medicine (IM) residents and hospitalist physicians commonly conduct bedside thoracenteses for both diagnostic and therapeutic purposes.[1] The American Board of Internal Medicine requires only that certification candidates understand the indications, complications, and management of thoracenteses.[2] A disconnect between clinical practice patterns and board requirements may increase patient risk because poorly trained physicians are more likely to cause complications.[3] National practice patterns show that many thoracenteses are referred to interventional radiology (IR).[4] However, research links performance of bedside procedures to reduced hospital length of stay and lower costs, without increasing risk of complications.[1, 5, 6]
Simulation‐based education offers a controlled environment where trainees improve procedural knowledge and skills without patient harm.[7] Simulation‐based mastery learning (SBML) is a rigorous form of competency‐based education that improves clinical skills and reduces iatrogenic complications and healthcare costs.[5, 6, 8] SBML also is an effective method to boost thoracentesis skills among IM residents.[9] However, there are no data to show that thoracentesis skills acquired in the simulation laboratory transfer to clinical environments and affect referral patterns.
We hypothesized that a thoracentesis SBML intervention would improve skills and increase procedural self‐confidence while reducing procedure referrals. This study aimed to (1) assess the effect of thoracentesis SBML on a cohort of IM residents' simulated skills and (2) compare traditionally trained (non‐SBML‐trained) residents, SBML‐trained residents, and hospitalist physicians regarding procedure referral patterns, self‐confidence, procedure experience, and reasons for referral.
METHODS AND MATERIALS
Study Design
We surveyed physicians about thoracenteses performed on patients cared for by postgraduate year (PGY)‐2 and PGY‐3 IM residents and hospitalist physicians at Northwestern Memorial Hospital (NMH) from December 2012 to May 2015. NMH is an 896‐bed, tertiary academic medical center, located in Chicago, Illinois. A random sample of IM residents participated in a thoracentesis SBML intervention, whereas hospitalist physicians did not. We compared referral patterns, self‐confidence, procedure experience, and reasons for referral between traditionally trained residents, SBML‐trained residents, and hospitalist physicians. The Northwestern University Institutional Review Board approved this study, and all study participants provided informed consent.
At NMH, resident‐staffed services include general IM and nonintensive care subspecialty medical services. There are also 2 nonteaching floors staffed by hospitalist attending physicians without residents. Thoracenteses performed on these services can either be done at the bedside or referred to pulmonary medicine or IR. The majority of thoracenteses performed by pulmonary medicine occur at the patients' bedside, and the patients also receive a clinical consultation. IR procedures are done in the IR suite without additional clinical consultation.
Procedure
One hundred sixty residents were available for training over the study period. We randomly selected 20% of the approximately 20 PGY‐2 and PGY‐3 IM residents assigned to the NMH medicine services each month to participate in SBML thoracentesis training before their rotation. Randomly selected residents were required to undergo SBML training but were not required to participate in the study. This selection process was repeated before every rotation during the study period. This randomized wait‐list control method allowed residents to serve as controls if not initially selected for training and remain eligible for SBML training in subsequent rotations.
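The wait‐list randomization above can be sketched as follows. This is an illustrative reconstruction, not the study's actual code; the function and roster names are hypothetical.

```python
import random

def select_for_sbml(rotation_roster, fraction=0.2, already_trained=None, seed=None):
    """Pick ~20% of a month's rotation roster for SBML training.

    Residents not chosen remain wait-list controls and stay eligible
    for selection in later rotations (hypothetical helper, for
    illustration only).
    """
    already_trained = already_trained or set()
    rng = random.Random(seed)
    eligible = [r for r in rotation_roster if r not in already_trained]
    k = max(1, round(fraction * len(rotation_roster)))
    return rng.sample(eligible, min(k, len(eligible)))

# A rotation of ~20 PGY-2/PGY-3 residents yields 4 trainees per month.
roster = [f"resident_{i}" for i in range(20)]
chosen = select_for_sbml(roster, seed=1)
print(len(chosen))  # 4
```

Repeating this selection before every rotation is what lets unselected residents serve as concurrent controls.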
Intervention
The SBML intervention used a pretest/post‐test design, as described elsewhere.[9] Residents completed a clinical skills pretest on a thoracentesis simulator using a previously published 26‐item checklist.[9] Following the pretest, residents participated in two 1‐hour training sessions that included a lecture, a video, and deliberate practice on the simulator with feedback from an expert instructor. Finally, residents completed a clinical skills post‐test using the checklist within 1 week of training (but on a different day) and were required to meet or exceed a minimum passing score (MPS) of 84.3%. The entire training, including pre‐ and post‐tests, took approximately 3 hours to complete, and residents were given an additional 1‐hour refresher training every 6 months for up to a year after the original training. We compared pre‐ and post‐test checklist scores to evaluate skills improvement.
Thoracentesis Patient Identification
The NMH electronic health record (EHR) was used to identify medical service inpatients who underwent a thoracentesis during the study period. NMH clinicians must place an EHR order for procedure kits, consults, and laboratory analysis of thoracentesis fluid. We developed a real‐time query of NMH's EHR that identified all patients with electronic orders for thoracenteses and monitored this daily.
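A minimal sketch of such a daily order query follows. The record fields and order types are assumptions for illustration; the actual NMH EHR schema is not described in the text.

```python
from datetime import date

# Hypothetical order records; field names and order types are
# illustrative, not the real NMH EHR schema.
orders = [
    {"patient_id": "A", "order_type": "thoracentesis_kit", "date": date(2014, 1, 6)},
    {"patient_id": "B", "order_type": "cbc", "date": date(2014, 1, 6)},
    {"patient_id": "C", "order_type": "pleural_fluid_analysis", "date": date(2014, 1, 6)},
]

# Orders that signal a thoracentesis: kits, consults, fluid analysis.
THORA_ORDER_TYPES = {"thoracentesis_kit", "thoracentesis_consult",
                     "pleural_fluid_analysis"}

def daily_thoracentesis_query(orders, day):
    """Return patients with any thoracentesis-related order placed on `day`."""
    return sorted({o["patient_id"] for o in orders
                   if o["order_type"] in THORA_ORDER_TYPES and o["date"] == day})

print(daily_thoracentesis_query(orders, date(2014, 1, 6)))  # ['A', 'C']
```

Running such a query once per day is enough to flag candidate procedures for next‐business‐day surveys.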
Physician Surveys
After each thoracentesis, we surveyed the PGY‐2 or PGY‐3 resident or hospitalist caring for the patient about the procedure. A research coordinator, blinded to whether the resident had received SBML, administered the surveys face‐to‐face Monday through Friday during normal business hours; surveys for procedures performed from Friday evening through Sunday occurred on Monday. Residents were not considered SBML‐trained until they met or exceeded the MPS on the simulated skills checklist at post‐test. Survey questions asked physicians who performed the procedure, their procedural self‐confidence, and the total number of thoracenteses performed in their career. For referred procedures, physicians were asked about reasons for referral, including lack of confidence, work hour restrictions (residents only), and low reimbursement rates.[10] There was also an option to add other reasons.
Measurement
The thoracentesis skills checklist documented all required steps for an evidence‐based thoracentesis. Each task received equal weight (0 = done incorrectly/not done, 1 = done correctly).[9] For physician surveys, self‐confidence about performing the procedure was rated on a scale of 0 = not confident to 100 = very confident. Reasons for referral were scored on a Likert scale 1 to 5 (1 = not at all important, 5 = very important). Other reasons for referral were categorized.
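The checklist scoring and MPS rule can be expressed compactly. This is a sketch of the scoring arithmetic only; the helper names are illustrative, not from the published checklist.

```python
MPS = 84.3  # minimum passing score (%) used in the study

def checklist_score(items):
    """Percentage score on the 26-item checklist.

    Each item is 0 (done incorrectly/not done) or 1 (done correctly),
    weighted equally.
    """
    assert len(items) == 26
    return 100.0 * sum(items) / len(items)

def passed(items):
    """True if the score meets or exceeds the 84.3% MPS."""
    return checklist_score(items) >= MPS

# Example: 22 of 26 steps correct -> 84.6%, just above the MPS.
items = [1] * 22 + [0] * 4
print(round(checklist_score(items), 1), passed(items))  # 84.6 True
```

Note that with 26 equally weighted items, 22 correct steps is the minimum count that clears the 84.3% threshold.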
Statistical Analysis
The clinical skills pre‐ and post‐test checklist scores were compared using a Wilcoxon matched‐pairs signed‐rank test. Physician survey data were compared between different procedure performers using the χ2 test, independent t test, analysis of variance (ANOVA), or Kruskal‐Wallis test, depending on data properties. Referral patterns measured by the Likert scale were averaged, and differences between physician groups were evaluated using ANOVA. Counts of other reasons for referral were compared using the χ2 test. We performed all statistical analyses using IBM SPSS Statistics version 23 (IBM Corp., Armonk, NY).
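As an illustration of the categorical comparisons, the Pearson χ2 statistic can be computed by hand on the bedside‐versus‐referred counts reported in Table 1. The authors used SPSS; the pure‐Python function below is our own sketch, not their code.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table.

    stat = sum over cells of (observed - expected)^2 / expected,
    where expected = row_total * column_total / grand_total.
    """
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    return sum(
        (obs - row_tot[i] * col_tot[j] / n) ** 2 / (row_tot[i] * col_tot[j] / n)
        for i, row in enumerate(table)
        for j, obs in enumerate(row)
    )

# Bedside vs referred counts by group (Table 1: 182/145/145 surveys).
table = [
    [26, 156],  # traditionally trained residents
    [32, 113],  # SBML-trained residents
    [1, 144],   # hospitalist physicians
]
print(round(chi_square_stat(table), 1))  # 31.2 with 2 df, consistent with P < 0.001
```

A statistic of ~31 on 2 degrees of freedom corresponds to P well below 0.001, matching the bedside‐procedure row of Table 1.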
RESULTS
Thoracentesis Clinical Skills
One hundred twelve (70%) residents were randomized to SBML, and all completed the protocol. Median pretest scores were 57.6% (interquartile range [IQR], 43.3–76.9), and final post‐test mastery scores were 96.2% (IQR, 96.2–100.0; P < 0.001). Twenty‐three residents (21.0%) failed to meet the MPS at initial post‐test but met the MPS on retest after <1 hour of additional training.
Physician Surveys
The EHR query identified 474 procedures eligible for physician surveys. One hundred twenty‐two residents and 51 hospitalist physicians completed surveys for 472 procedures (99.6%): 182 for patients of traditionally trained residents, 145 for patients of SBML‐trained residents, and 145 for patients of hospitalist physicians. As shown in Table 1, 413 (88%) of all procedures were referred to another service. Traditionally trained residents were more likely to refer to IR compared to SBML‐trained residents or hospitalist physicians. SBML‐trained residents were more likely to perform bedside procedures, whereas hospitalist physicians were most likely to refer to pulmonary medicine. SBML‐trained residents were most confident in their procedural skills, despite hospitalist physicians having performed more actual procedures.
Table 1. Characteristics of 472 Thoracentesis Procedures Described on Surveys of Traditionally Trained Residents, SBML‐Trained Residents, and Hospitalist Physicians

Characteristic | Traditionally Trained Resident Surveys, n = 182 | SBML‐Trained Resident Surveys, n = 145 | Hospitalist Physician Surveys, n = 145 | P Value
Bedside procedures, no. (%) | 26 (14.3%) | 32 (22.1%) | 1 (0.7%) | <0.001
IR procedures, no. (%) | 119 (65.4%) | 74 (51.0%) | 82 (56.6%) | 0.029
Pulmonary procedures, no. (%) | 37 (20.3%) | 39 (26.9%) | 62 (42.8%) | <0.001
Procedure self‐confidence, mean (SD)* | 43.6 (28.66) | 68.2 (25.17) | 55.7 (31.17) | <0.001
Experience performing actual procedures, median (IQR) | 1 (1–3) | 2 (1–3.5) | 10 (4–25) | <0.001

NOTE: Abbreviations: IQR, interquartile range; IR, interventional radiology; SBML, simulation‐based mastery learning; SD, standard deviation. *Scale of 0 = not at all confident to 100 = very confident.
Traditionally trained residents were most likely to rate low confidence as a reason for referring thoracenteses (Table 2). Hospitalist physicians were more likely to cite lack of time to perform the procedure themselves. Other reasons differed across groups: SBML‐trained residents were more likely to refer because of attending preference, whereas traditionally trained residents were most likely to refer because of high‐risk/technically difficult cases.
Table 2. Reasons Provided for Referral of 413 Thoracentesis Procedures Between Traditionally Trained Residents, SBML‐Trained Residents, and Hospitalist Physicians

Reason | Traditionally Trained Residents, n = 156 | SBML‐Trained Residents, n = 113 | Hospitalist Physicians, n = 144 | P Value
Lack of confidence to perform procedure, mean (SD)* | 3.46 (1.32) | 2.52 (1.45) | 2.89 (1.60) | <0.001
Work hour restrictions, mean (SD)* | 2.05 (1.37) | 1.50 (1.11) | n/a | 0.001
Low reimbursement, mean (SD)* | 1.02 (0.12) | 1.0 (0) | 1.22 (0.69) | <0.001
Other reasons for referral, no. (%):
  Attending preference | 8 (5.1%) | 11 (9.7%) | 3 (2.1%) | 0.025
  Don't know how | 6 (3.8%) | 0 | 0 | 0.007
  Failed bedside | 0 | 2 (1.8%) | 0 | 0.07
  High risk/technically difficult case | 24 (15.4%) | 12 (10.6%) | 5 (3.5%) | 0.003
  IR or pulmonary patient | 5 (3.2%) | 2 (1.8%) | 4 (2.8%) | 0.77
  Other IR procedure taking place | 11 (7.1%) | 9 (8.0%) | 4 (2.8%) | 0.13
  Patient preference | 2 (1.3%) | 7 (6.2%) | 2 (1.4%) | 0.024
  Time | 9 (5.8%) | 7 (6.2%) | 29 (20.1%) | <0.001

NOTE: Abbreviations: IR, interventional radiology; SBML, simulation‐based mastery learning; SD, standard deviation. *Mean score on a 5‐point Likert scale (1 = not at all important, 5 = very important). Some expected counts are less than 5; the χ2 test may be invalid.
DISCUSSION
This study confirms earlier research showing that thoracentesis SBML improves residents' clinical skills, but is the first to use a randomized study design.[9] Use of the mastery model in health professions education ensures that all learners are competent to provide patient care including performing invasive procedures. Such rigorous education yields downstream translational outcomes including safety profiles comparable to experts.[1, 6]
This study also shows that SBML‐trained residents displayed higher self‐confidence and performed significantly more bedside procedures than traditionally trained residents and more experienced hospitalist physicians. Although the Society of Hospital Medicine considers thoracentesis skills a core competency for hospitalist physicians,[11] we speculate that some hospitalist physicians had not performed a thoracentesis in years. A recent national survey showed that only 44% of hospitalist physicians had performed at least 1 thoracentesis within the past year.[10] Research also documents a cultural shift toward referring procedures to specialty services such as IR, with referrals increasing by over 900% in the past 2 decades.[4] Our results provide novel information about procedure referrals because we show that SBML yields translational outcomes: improved skills and self‐confidence that influence referral patterns. SBML‐trained residents performed almost a quarter of procedures at the bedside. Although this represents only an 8% absolute difference in bedside procedures compared to traditionally trained residents, training a large number of residents using SBML would shift a meaningful number of procedures to the patient bedside. According to University HealthSystem Consortium data, approximately 35,325 thoracenteses are performed yearly in US teaching hospitals.[1] Shifting even 8% of these procedures to the bedside would yield significant clinical benefit and cost savings, because bedside procedures are safe, cost‐effective, and highly satisfying to patients.[1, 12, 13] Further study is required to determine the impact on referral patterns of providing SBML training to attending physicians.
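The scale of that 8% shift can be checked with back‐of‐envelope arithmetic using the yearly volume cited above:

```python
# Yearly thoracenteses at US teaching hospitals (University
# HealthSystem Consortium data, cited in the text).
annual_thoracenteses = 35_325

# 8% absolute difference in bedside procedures between SBML-trained
# and traditionally trained residents.
absolute_shift = 0.08

shifted = round(annual_thoracenteses * absolute_shift)
print(shifted)  # 2826 procedures moved to the bedside per year
```

Roughly 2,800 procedures per year moving to the bedside is the basis for the clinical‐benefit and cost‐savings claim.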
Our study also provides information about the rationale for procedure referrals. Earlier work speculates that financial incentives, training, and time may explain high procedure referral rates.[10] One report on IM residents noted an 87% IR referral rate for thoracenteses and confirmed that both training and time were major reasons.[14] Hospitalist physicians reported lack of time as the major factor leading to procedural referrals, which is problematic because bedside procedures yield similar clinical outcomes at lower costs.[1, 12] Attending preference also prevented 11 additional bedside procedures in the SBML‐trained group. Schedule adjustments and SBML training for hospitalist physicians should be considered, because bundled payments under the Affordable Care Act may favor shifting to the higher‐value approach of bedside thoracentesis.[15]
Our study has several limitations. First, we performed surveys at only 1 institution, and the results may not be generalizable. Second, we relied on an electronic query to alert us to thoracenteses; the query may have missed procedures that were unsuccessful or lacked EHR orders. Third, physicians may have been surveyed more than once for different patients or for the same patient, and their opinions may have shifted over time. Fourth, some reasons for referral, such as lack of time, had to be written in as free text rather than being asked specifically, which could have resulted in underreporting. Finally, we did not assess the clinical outcomes of thoracenteses in this study, although earlier work shows that residents who complete SBML have safety outcomes similar to those of IR.[1, 6]
In summary, IM residents who complete thoracentesis SBML demonstrate improved clinical skills and are more likely to perform bedside procedures. In an era of bundled payments, rethinking current care models to promote cost‐effective care is necessary. We believe providing additional education, training, and support to hospitalist physicians to promote bedside procedures is a promising strategy that warrants further study.
Acknowledgements
The authors acknowledge Drs. Douglas Vaughan and Kevin O'Leary for their support and encouragement of this work. The authors also thank the internal medicine residents at Northwestern for their dedication to patient care.
Disclosures: This project was supported by grant R18HS021202‐01 from the Agency for Healthcare Research and Quality (AHRQ). AHRQ had no role in the preparation, review, or approval of the manuscript. Trial Registration: ClinicalTrials.gov NCT01898247 (https://clinicaltrials.gov/ct2/show/NCT01898247?term=thoracentesis+and+simulation& rank=1). The authors report no conflicts of interest.
References
Kozmic SE, Wayne DB, Feinglass J, Hohmann SF, Barsuk JH. Thoracentesis procedures at university hospitals: comparing outcomes by specialty. Jt Comm J Qual Patient Saf. 2015;42(1):34–40.
American Board of Internal Medicine. Internal medicine policies. Available at: http://www.abim.org/certification/policies/internal‐medicine‐subspecialty‐policies/internal‐medicine.aspx. Accessed March 9, 2016.
Gordon CE, Feller‐Kopman D, Balk EM, Smetana GW. Pneumothorax following thoracentesis: a systematic review and meta‐analysis. Arch Intern Med. 2010;170(4):332–339.
Duszak R, Chatterjee AR, Schneider DA. National fluid shifts: fifteen‐year trends in paracentesis and thoracentesis procedures. J Am Coll Radiol. 2010;7(11):859–864.
Barsuk JH, Cohen ER, Feinglass J, et al. Cost savings of performing paracentesis procedures at the bedside after simulation‐based education. Simul Healthc. 2014;9(5):312–318.
Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Clinical outcomes after bedside and interventional radiology paracentesis procedures. Am J Med. 2013;126(4):349–356.
Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA. 1999;282(9):861–866.
Cohen ER, Feinglass J, Barsuk JH, et al. Cost savings from reduced catheter‐related bloodstream infection after simulation‐based education for residents in a medical intensive care unit. Simul Healthc. 2010;5(2):98–102.
Wayne DB, Barsuk JH, O'Leary KJ, Fudala MJ, McGaghie WC. Mastery learning of thoracentesis skills by internal medicine residents using simulation technology and deliberate practice. J Hosp Med. 2008;3(1):48–54.
Thakkar R, Wright SM, Alguire P, Wigton RS, Boonyasai RT. Procedures performed by hospitalist and non‐hospitalist general internists. J Gen Intern Med. 2010;25(5):448–452.
Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(suppl 1):48–56.
Barsuk JH, Feinglass J, Kozmic SE, Hohmann SF, Ganger D, Wayne DB. Specialties performing paracentesis procedures at university hospitals: implications for training and certification. J Hosp Med. 2014;9(3):162–168.
Barsuk JH, Kozmic SE, Scher J, Feinglass J, Hoyer A, Wayne DB. Are we providing patient‐centered care? Preferences about paracentesis and thoracentesis procedures. Patient Exp J. 2014;1(2):94–103. Available at: http://pxjournal.org/cgi/viewcontent.cgi?article=1024
0
0
0.007
Failed bedside
0
2 (1.8%)
0
0.07
High risk/technically difficult case
24 (15.4%)
12 (10.6%)
5 (3.5%)
0.003
IR or pulmonary patient
5 (3.2%)
2 (1.8%)
4 (2.8%)
0.77
Other IR procedure taking place
11 (7.1%)
9 (8.0%)
4 (2.8%)
0.13
Patient preference
2 (1.3%)
7 (6.2%)
2 (3.5%)
0.024
Time
9 (5.8%)
7 (6.2%)
29 (20.1%)
<0.001
DISCUSSION
This study confirms earlier research showing that thoracentesis SBML improves residents' clinical skills, but is the first to use a randomized study design.[9] Use of the mastery model in health professions education ensures that all learners are competent to provide patient care including performing invasive procedures. Such rigorous education yields downstream translational outcomes including safety profiles comparable to experts.[1, 6]
This study also shows that SBML‐trained residents displayed higher self‐confidence and performed significantly more bedside procedures than traditionally trained residents and more experienced hospitalist physicians. Although the Society of Hospital Medicine considers thoracentesis skills a core competency for hospitalist physicians,[11] we speculate that some hospitalist physicians had not performed a thoracentesis in years. A recent national survey showed that only 44% of hospitalist physicians performed at least 1 thoracentesis within the past year.[10] Research also shows a shift in medical culture to refer procedures to specialty services, such as IR, by over 900% in the past 2 decades.[4] Our results provide novel information about procedure referrals because we show that SBML provides translational outcomes by improving skills and self‐confidence that influence referral patterns. SBML‐trained residents performed almost a quarter of procedures at the bedside. Although this only represents an 8% absolute difference in bedside procedures compared to traditionally trained residents, if a large number of residents are trained using SBML this results in a meaningful number of procedures shifted to the patient bedside. According to University HealthSystem Consortium data, in US teaching hospitals, approximately 35,325 thoracenteses are performed yearly.[1] Shifting even 8% of these procedures to the bedside would result in significant clinical benefit and cost savings. Reduced referrals increase additional bedside procedures that are safe, cost‐effective, and highly satisfying to patients.[1, 12, 13] Further study is required to determine the impact on referral patterns after providing SMBL training to attending physicians.
Our study also provides information about the rationale for procedure referrals. Earlier work speculates that financial incentive, training and time may explain high procedure referral rates.[10] One report on IM residents noted an 87% IR referral rate for thoracentesis, and confirmed that both training and time were major reasons.[14] Hospitalist physicians reported lack of time as the major factor leading to procedural referrals, which is problematic because bedside procedures yield similar clinical outcomes at lower costs.[1, 12] Attending preference also prevented 11 additional bedside procedures in the SBML‐trained group. Schedule adjustments and SBML training of hospitalist physicians should be considered, because bundled payments in the Affordable Care Act may favor shifting to the higher‐value approach of bedside thoracenteses.[15]
Our study has several limitations. First, we only performed surveys at 1 institution and the results may not be generalizable. Second, we relied on an electronic query to alert us to thoracenteses. Our query may have missed procedures that were unsuccessful or did not have EHR orders entered. Third, physicians may have been surveyed more than once for different or the same patient(s), but opinions may have shifted over time. Fourth, some items such as time needed to be written in the survey and were not specifically asked. This could have resulted in under‐reporting. Finally, we did not assess the clinical outcomes of thoracenteses in this study, although earlier work shows that residents who complete SBML have safety outcomes similar to IR.[1, 6]
In summary, IM residents who complete thoracentesis SBML demonstrate improved clinical skills and are more likely to perform bedside procedures. In an era of bundled payments, rethinking current care models to promote cost‐effective care is necessary. We believe providing additional education, training, and support to hospitalist physicians to promote bedside procedures is a promising strategy that warrants further study.
Acknowledgements
The authors acknowledge Drs. Douglas Vaughan and Kevin O'Leary for their support and encouragement of this work. The authors also thank the internal medicine residents at Northwestern for their dedication to patient care.
Disclosures: This project was supported by grant R18HS021202‐01 from the Agency for Healthcare Research and Quality (AHRQ). AHRQ had no role in the preparation, review, or approval of the manuscript. Trial Registration: ClinicalTrials.gov NCT01898247 (https://clinicaltrials.gov/ct2/show/NCT01898247?term=thoracentesis+and+simulation& rank=1). The authors report no conflicts of interest.
Internal medicine (IM) residents and hospitalist physicians commonly conduct bedside thoracenteses for both diagnostic and therapeutic purposes.[1] The American Board of Internal Medicine only requires that certification candidates understand the indications, complications, and management of thoracenteses.[2] A disconnect between clinical practice patterns and board requirements may increase patient risk because poorly trained physicians are more likely to cause complications.[3] National practice patterns show that many thoracenteses are referred to interventional radiology (IR).[4] However, research links performance of bedside procedures to reduced hospital length of stay and lower costs, without increasing risk of complications.[1, 5, 6]
Simulation‐based education offers a controlled environment where trainees improve procedural knowledge and skills without patient harm.[7] Simulation‐based mastery learning (SBML) is a rigorous form of competency‐based education that improves clinical skills and reduces iatrogenic complications and healthcare costs.[5, 6, 8] SBML also is an effective method to boost thoracentesis skills among IM residents.[9] However, there are no data to show that thoracentesis skills acquired in the simulation laboratory transfer to clinical environments and affect referral patterns.
We hypothesized that a thoracentesis SBML intervention would improve skills and increase procedural self‐confidence while reducing procedure referrals. This study aimed to (1) assess the effect of thoracentesis SBML on a cohort of IM residents' simulated skills and (2) compare traditionally trained (non‐SBML‐trained) residents, SBML‐trained residents, and hospitalist physicians regarding procedure referral patterns, self‐confidence, procedure experience, and reasons for referral.
METHODS AND MATERIALS
Study Design
We surveyed physicians about thoracenteses performed on patients cared for by postgraduate year (PGY)‐2 and PGY‐3 IM residents and hospitalist physicians at Northwestern Memorial Hospital (NMH) from December 2012 to May 2015. NMH is an 896‐bed, tertiary academic medical center, located in Chicago, Illinois. A random sample of IM residents participated in a thoracentesis SBML intervention, whereas hospitalist physicians did not. We compared referral patterns, self‐confidence, procedure experience, and reasons for referral between traditionally trained residents, SBML‐trained residents, and hospitalist physicians. The Northwestern University Institutional Review Board approved this study, and all study participants provided informed consent.
At NMH, resident‐staffed services include general IM and nonintensive care subspecialty medical services. There are also 2 nonteaching floors staffed by hospitalist attending physicians without residents. Thoracenteses performed on these services can either be done at the bedside or referred to pulmonary medicine or IR. The majority of thoracenteses performed by pulmonary medicine occur at the patients' bedside, and the patients also receive a clinical consultation. IR procedures are done in the IR suite without additional clinical consultation.
Procedure
One hundred sixty residents were available for training over the study period. We randomly selected 20% of the approximately 20 PGY‐2 and PGY‐3 IM residents assigned to the NMH medicine services each month to participate in SBML thoracentesis training before their rotation. Randomly selected residents were required to undergo SBML training but were not required to participate in the study. This selection process was repeated before every rotation during the study period. This randomized wait‐list control method allowed residents to serve as controls if not initially selected for training and remain eligible for SBML training in subsequent rotations.
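The monthly wait‐list selection described above can be sketched in a few lines; the roster names, 20% fraction, and seed below are illustrative, not the study's actual randomization procedure:

```python
import random

def select_for_sbml(roster, fraction=0.2, seed=None):
    """Randomly pick ~20% of the residents on the upcoming rotation for SBML
    training; the rest serve as traditionally trained controls and remain
    eligible for selection in later rotations (wait-list control design)."""
    rng = random.Random(seed)
    k = max(1, round(len(roster) * fraction))
    trained = set(rng.sample(roster, k))
    controls = [r for r in roster if r not in trained]
    return sorted(trained), controls

# Hypothetical monthly roster of 20 PGY-2/PGY-3 residents
roster = [f"resident_{i:02d}" for i in range(1, 21)]
trained, controls = select_for_sbml(roster, seed=42)
```

Because selection is repeated before every rotation, an unselected resident contributes control‐group surveys until (and unless) a later draw selects them for training.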
Intervention
The SBML intervention used a pretest/post‐test design, as described elsewhere.[9] Residents completed a clinical skills pretest on a thoracentesis simulator using a previously published 26‐item checklist.[9] Following the pretest, residents participated in two 1‐hour training sessions including a lecture, video, and deliberate practice on the simulator with feedback from an expert instructor. Finally, residents completed a clinical skills post‐test using the checklist within 1 week of training (but on a different day) and were required to meet or exceed an 84.3% minimum passing score (MPS). The entire training, including pre‐ and post‐tests, took approximately 3 hours to complete, and residents were given an additional 1‐hour refresher training every 6 months for up to a year after original training. We compared pre‐ and post‐test checklist scores to evaluate skills improvement.
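The mastery standard reduces to simple arithmetic on the binary checklist: a percent‐correct score compared against the 84.3% MPS. A minimal sketch, assuming equal item weights as described in Measurement (the function names are ours, not the study's):

```python
def checklist_score(items):
    """items: 26 binary task scores (0 = done incorrectly/not done,
    1 = done correctly). Returns the percent of tasks done correctly."""
    if len(items) != 26:
        raise ValueError("expected 26 checklist items")
    return 100.0 * sum(items) / len(items)

def meets_mps(score_pct, mps=84.3):
    """A resident passes the post-test only at or above the minimum
    passing score; otherwise they retrain and retest."""
    return score_pct >= mps

# e.g., 22 of 26 steps done correctly -> ~84.6%, just above the MPS
score = checklist_score([1] * 22 + [0] * 4)
passed = meets_mps(score)
```

Under this scoring, 22 of 26 items is the fewest that clears the 84.3% bar, which is why residents failing the initial post‐test could meet the MPS after less than an hour of additional practice.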
Thoracentesis Patient Identification
The NMH electronic health record (EHR) was used to identify medical service inpatients who underwent a thoracentesis during the study period. NMH clinicians must place an EHR order for procedure kits, consults, and laboratory analysis of thoracentesis fluid. We developed a real‐time query of NMH's EHR that identified all patients with electronic orders for thoracenteses and monitored this daily.
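The daily EHR query amounts to scanning the order feed for thoracentesis‐related orders. A hedged sketch; the record fields and keyword strings below are illustrative and not NMH's actual schema or order catalog:

```python
# Keywords standing in for procedure-kit, consult, and fluid-analysis orders
THORA_KEYWORDS = ("thoracentesis kit", "thoracentesis consult", "pleural fluid")

def daily_thoracentesis_query(orders):
    """Return sorted patient IDs with any thoracentesis-related
    electronic order in the day's order feed."""
    hits = set()
    for order in orders:
        if any(kw in order["order_name"].lower() for kw in THORA_KEYWORDS):
            hits.add(order["patient_id"])
    return sorted(hits)

# Hypothetical day's orders
orders = [
    {"patient_id": "A1", "order_name": "Thoracentesis Kit"},
    {"patient_id": "A2", "order_name": "CBC with differential"},
    {"patient_id": "A1", "order_name": "Pleural Fluid Analysis"},
]
flagged = daily_thoracentesis_query(orders)
```

Because clinicians must place an order for kits, consults, or fluid analysis, monitoring this feed daily captures nearly all procedures, though (as the limitations note) procedures without entered orders would be missed.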
Physician Surveys
After each thoracentesis, we surveyed the PGY‐2 or PGY‐3 resident or hospitalist caring for the patient about the procedure. A research coordinator, blind to whether the resident received SBML, performed the surveys face‐to‐face on Monday to Friday during normal business hours. Residents were not considered SBML‐trained until they met or exceeded the MPS on the simulated skills checklist at post‐test. Surveys occurred on Monday for procedures performed on Friday evening through Sunday. Survey questions asked physicians about who performed the procedure, their procedural self‐confidence, and total number of thoracenteses performed in their career. For referred procedures, physicians were asked about reasons for referral including lack of confidence, work hour restrictions (residents only), and low reimbursement rates.[10] There was also an option to add other reasons.
Measurement
The thoracentesis skills checklist documented all required steps for an evidence‐based thoracentesis. Each task received equal weight (0 = done incorrectly/not done, 1 = done correctly).[9] For physician surveys, self‐confidence about performing the procedure was rated on a scale of 0 = not confident to 100 = very confident. Reasons for referral were scored on a Likert scale 1 to 5 (1 = not at all important, 5 = very important). Other reasons for referral were categorized.
Statistical Analysis
The clinical skills pre‐ and post‐test checklist scores were compared using a Wilcoxon matched‐pairs signed‐rank test. Physician survey data were compared between different procedure performers using the χ2 test, independent t test, analysis of variance (ANOVA), or Kruskal‐Wallis test depending on data properties. Referral patterns measured by the Likert scale were averaged, and differences between physician groups were evaluated using ANOVA. Counts of other reasons for referral were compared using the χ2 test. We performed all statistical analyses using IBM SPSS Statistics version 23 (IBM Corp., Armonk, NY).
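The same comparisons can be sketched with SciPy on synthetic data; the arrays below are simulated to resemble the study's summary statistics and are not the raw data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Paired pre/post checklist scores -> Wilcoxon matched-pairs signed-rank test
pre = rng.normal(58, 15, 112)
post = np.clip(pre + rng.normal(35, 8, 112), None, 100.0)
w_stat, w_p = stats.wilcoxon(pre, post)

# Bedside vs. referred counts by physician group -> chi-squared test
table = np.array([[26, 156],   # traditionally trained residents
                  [32, 113],   # SBML-trained residents
                  [1, 144]])   # hospitalist physicians
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

# Likert-scale means across the three groups -> one-way ANOVA
g1 = rng.normal(3.5, 1.3, 156)
g2 = rng.normal(2.5, 1.4, 113)
g3 = rng.normal(2.9, 1.6, 144)
f_stat, anova_p = stats.f_oneway(g1, g2, g3)

# Skewed data (e.g., career procedure counts) -> Kruskal-Wallis test
h_stat, kw_p = stats.kruskal(g1, g2, g3)
```

Choosing between ANOVA and Kruskal‐Wallis (or t test and Wilcoxon) follows the "depending on data properties" rule above: the rank‐based tests are preferred when the outcome is skewed or ordinal.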
RESULTS
Thoracentesis Clinical Skills
One hundred twelve (70%) residents were randomized to SBML, and all completed the protocol. Median pretest scores were 57.6% (interquartile range [IQR], 43.3–76.9), and median final post‐test mastery scores were 96.2% (IQR, 96.2–100.0; P < 0.001). Twenty‐three residents (21.0%) failed to meet the MPS at initial post‐test, but met the MPS on retest after <1 hour of additional training.
Physician Surveys
The EHR query identified 474 procedures eligible for physician surveys. One hundred twenty‐two residents and 51 hospitalist physicians completed surveys for 472 procedures (99.6%): 182 for patients cared for by traditionally trained residents, 145 by SBML‐trained residents, and 145 by hospitalist physicians. As shown in Table 1, 413 (88%) of all procedures were referred to another service. Traditionally trained residents were more likely to refer to IR compared to SBML‐trained residents or hospitalist physicians. SBML‐trained residents were more likely to perform bedside procedures, whereas hospitalist physicians were most likely to refer to pulmonary medicine. SBML‐trained residents were most confident in their procedural skills, despite hospitalist physicians performing more actual procedures.
Table 1. Characteristics of 472 Thoracentesis Procedures Described on Surveys of Traditionally Trained Residents, SBML‐Trained Residents, and Hospitalist Physicians

| | Traditionally Trained Resident Surveys, n = 182 | SBML‐Trained Resident Surveys, n = 145 | Hospitalist Physician Surveys, n = 145 | P Value |
|---|---|---|---|---|
| Bedside procedures, no. (%) | 26 (14.3%) | 32 (22.1%) | 1 (0.7%) | <0.001 |
| IR procedures, no. (%) | 119 (65.4%) | 74 (51.0%) | 82 (56.6%) | 0.029 |
| Pulmonary procedures, no. (%) | 37 (20.3%) | 39 (26.9%) | 62 (42.8%) | <0.001 |
| Procedure self‐confidence, mean (SD)* | 43.6 (28.66) | 68.2 (25.17) | 55.7 (31.17) | <0.001 |
| Experience performing actual procedures, median (IQR) | 1 (1–3) | 2 (1–3.5) | 10 (4–25) | <0.001 |

NOTE: Abbreviations: IQR, interquartile range; IR, interventional radiology; SBML, simulation‐based mastery learning; SD, standard deviation. *Scale of 0 = not at all confident to 100 = very confident.
Traditionally trained residents rated lack of confidence as the most important reason for referring thoracenteses (Table 2). Hospitalist physicians were more likely to cite lack of time to perform the procedure themselves. Other reasons also differed across groups: SBML‐trained residents were more likely to refer because of attending preference, whereas traditionally trained residents were most likely to refer because of high‐risk or technically difficult cases.
Table 2. Reasons Provided for Referral of 413 Thoracentesis Procedures Between Traditionally Trained Residents, SBML‐Trained Residents, and Hospitalist Physicians

| | Traditionally Trained Residents, n = 156 | SBML‐Trained Residents, n = 113 | Hospitalist Physicians, n = 144 | P Value |
|---|---|---|---|---|
| Lack of confidence to perform procedure, mean (SD)* | 3.46 (1.32) | 2.52 (1.45) | 2.89 (1.60) | <0.001 |
| Work hour restrictions, mean (SD)* | 2.05 (1.37) | 1.50 (1.11) | n/a | 0.001 |
| Low reimbursement, mean (SD)* | 1.02 (0.12) | 1.00 (0) | 1.22 (0.69) | <0.001 |
| Other reasons for referral, no. (%) | | | | |
| Attending preference | 8 (5.1%) | 11 (9.7%) | 3 (2.1%) | 0.025 |
| Don't know how | 6 (3.8%) | 0 | 0 | 0.007 |
| Failed bedside | 0 | 2 (1.8%) | 0 | 0.07 |
| High risk/technically difficult case | 24 (15.4%) | 12 (10.6%) | 5 (3.5%) | 0.003 |
| IR or pulmonary patient | 5 (3.2%) | 2 (1.8%) | 4 (2.8%) | 0.77 |
| Other IR procedure taking place | 11 (7.1%) | 9 (8.0%) | 4 (2.8%) | 0.13 |
| Patient preference | 2 (1.3%) | 7 (6.2%) | 2 (1.4%) | 0.024 |
| Time | 9 (5.8%) | 7 (6.2%) | 29 (20.1%) | <0.001 |

NOTE: Abbreviations: IR, interventional radiology; SBML, simulation‐based mastery learning; SD, standard deviation. *Mean score on a 5‐point Likert scale (1 = not at all important, 5 = very important). Some expected counts are less than 5; the χ2 test may be invalid.
DISCUSSION
This study confirms earlier research showing that thoracentesis SBML improves residents' clinical skills, but is the first to use a randomized study design.[9] Use of the mastery model in health professions education ensures that all learners are competent to provide patient care including performing invasive procedures. Such rigorous education yields downstream translational outcomes including safety profiles comparable to experts.[1, 6]
This study also shows that SBML‐trained residents displayed higher self‐confidence and performed significantly more bedside procedures than traditionally trained residents and more experienced hospitalist physicians. Although the Society of Hospital Medicine considers thoracentesis skills a core competency for hospitalist physicians,[11] we speculate that some hospitalist physicians had not performed a thoracentesis in years. A recent national survey showed that only 44% of hospitalist physicians had performed at least 1 thoracentesis within the past year.[10] Research also documents a shift in medical culture toward referring procedures to specialty services such as IR, with referrals increasing by over 900% in the past 2 decades.[4] Our results provide novel information about procedure referrals because we show that SBML improves skills and self‐confidence in ways that influence referral patterns. SBML‐trained residents performed almost a quarter of procedures at the bedside. Although this represents only an 8% absolute difference in bedside procedures compared to traditionally trained residents, if a large number of residents are trained using SBML, a meaningful number of procedures would shift to the patient bedside. According to University HealthSystem Consortium data, approximately 35,325 thoracenteses are performed yearly in US teaching hospitals.[1] Shifting even 8% of these procedures to the bedside would yield significant clinical benefit and cost savings. Reduced referrals translate into additional bedside procedures that are safe, cost‐effective, and highly satisfying to patients.[1, 12, 13] Further study is required to determine the impact on referral patterns of providing SBML training to attending physicians.
Our study also provides information about the rationale for procedure referrals. Earlier work speculated that financial incentives, training, and time may explain high procedure referral rates.[10] One report on IM residents noted an 87% IR referral rate for thoracentesis and confirmed that both training and time were major reasons.[14] Hospitalist physicians reported lack of time as the major factor leading to procedural referrals, which is problematic because bedside procedures yield similar clinical outcomes at lower costs.[1, 12] Attending preference also prevented 11 additional bedside procedures in the SBML‐trained group. Schedule adjustments and SBML training of hospitalist physicians should be considered, because bundled payments under the Affordable Care Act may favor shifting to the higher‐value approach of bedside thoracentesis.[15]
Our study has several limitations. First, we only performed surveys at 1 institution, and the results may not be generalizable. Second, we relied on an electronic query to alert us to thoracenteses; the query may have missed procedures that were unsuccessful or did not have EHR orders entered. Third, physicians may have been surveyed more than once for different patients or the same patient, and their opinions may have shifted over time. Fourth, some reasons for referral, such as lack of time, had to be written in as free text rather than being asked about directly, which could have resulted in under‐reporting. Finally, we did not assess the clinical outcomes of thoracenteses in this study, although earlier work shows that residents who complete SBML have safety outcomes similar to IR.[1, 6]
In summary, IM residents who complete thoracentesis SBML demonstrate improved clinical skills and are more likely to perform bedside procedures. In an era of bundled payments, rethinking current care models to promote cost‐effective care is necessary. We believe providing additional education, training, and support to hospitalist physicians to promote bedside procedures is a promising strategy that warrants further study.
Acknowledgements
The authors acknowledge Drs. Douglas Vaughan and Kevin O'Leary for their support and encouragement of this work. The authors also thank the internal medicine residents at Northwestern for their dedication to patient care.
Disclosures: This project was supported by grant R18HS021202‐01 from the Agency for Healthcare Research and Quality (AHRQ). AHRQ had no role in the preparation, review, or approval of the manuscript. Trial Registration: ClinicalTrials.gov NCT01898247 (https://clinicaltrials.gov/ct2/show/NCT01898247?term=thoracentesis+and+simulation&rank=1). The authors report no conflicts of interest.
References
Kozmic SE, Wayne DB, Feinglass J, Hohmann SF, Barsuk JH. Thoracentesis procedures at university hospitals: comparing outcomes by specialty. Jt Comm J Qual Patient Saf.2015;42(1):34–40.
American Board of Internal Medicine. Internal medicine policies. Available at: http://www.abim.org/certification/policies/internal‐medicine‐subspecialty‐policies/internal‐medicine.aspx. Accessed March 9, 2016.
Gordon CE, Feller‐Kopman D, Balk EM, Smetana GW. Pneumothorax following thoracentesis: a systematic review and meta‐analysis. Arch Intern Med.2010;170(4):332–339.
Duszak R, Chatterjee AR, Schneider DA. National fluid shifts: fifteen‐year trends in paracentesis and thoracentesis procedures. J Am Coll Radiol.2010;7(11):859–864.
Barsuk JH, Cohen ER, Feinglass J, et al. Cost savings of performing paracentesis procedures at the bedside after simulation‐based education. Simul Healthc.2014;9(5):312–318.
Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Clinical outcomes after bedside and interventional radiology paracentesis procedures. Am J Med.2013;126(4):349–356.
Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA.1999;282(9):861–866.
Cohen ER, Feinglass J, Barsuk JH, et al. Cost savings from reduced catheter‐related bloodstream infection after simulation‐based education for residents in a medical intensive care unit. Simul Healthc.2010;5(2):98–102.
Wayne DB, Barsuk JH, O'Leary KJ, Fudala MJ, McGaghie WC. Mastery learning of thoracentesis skills by internal medicine residents using simulation technology and deliberate practice. J Hosp Med.2008;3(1):48–54.
Thakkar R, Wright SM, Alguire P, Wigton RS, Boonyasai RT. Procedures performed by hospitalist and non‐hospitalist general internists. J Gen Intern Med.2010;25(5):448–452.
Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med.2006;1(suppl 1):48–56.
Barsuk JH, Feinglass J, Kozmic SE, Hohmann SF, Ganger D, Wayne DB. Specialties performing paracentesis procedures at university hospitals: implications for training and certification. J Hosp Med.2014;9(3):162–168.
Barsuk JH, Kozmic SE, Scher J, Feinglass J, Hoyer A, Wayne DB. Are we providing patient‐centered care? Preferences about paracentesis and thoracentesis procedures. Patient Exp J.2014;1(2):94–103. Available at: http://pxjournal.org/cgi/viewcontent.cgi?article=1024
Address for correspondence and reprint requests: Jeffrey H. Barsuk, MD, Division of Hospital Medicine; 211 E. Ontario Street, Suite 717, Chicago, IL 60611; Telephone: 312‐926‐3680; Fax: 312‐926‐4588; E‐mail: [email protected]
Heart failure is a frequent cause of hospital admission in the United States, with an estimated cost of $31 billion per year.[1] Discharging a patient with heart failure requires a multidisciplinary approach that includes anticipating a discharge date, scheduling follow‐up, reconciling medications, assessing home‐care or placement needs, and delivering patient education.[2, 3] Comprehensive transitional care interventions reduce readmissions and mortality.[2] Individually tailored and structured discharge plans decrease length of stay and readmissions.[3] The Centers for Medicare and Medicaid Services recently proposed that discharge planning begin within 24 hours of inpatient admission,[4] despite inadequate data surrounding the optimal time to begin discharge planning.[3] In addition to enabling transitional care, identifying patients vulnerable to extended hospitalization aids in risk stratification, as prolonged length of stay is associated with increased risk of readmission and mortality.[5, 6]
Physicians are not able to accurately prognosticate whether patients will experience short‐term outcomes such as readmissions or mortality.[7, 8] Likewise, physicians do not predict length of stay accurately for heterogeneous patient populations,[9, 10, 11] even on the morning prior to anticipated discharge.[12] Prediction accuracy for patients admitted with heart failure, however, has not been adequately studied. The objectives of this study were to measure the accuracy of inpatient physicians' early predictions of length of stay for patients admitted with heart failure and to determine whether level of experience improved accuracy.
METHODS
In this prospective, observational study, we measured physicians' predictions of length of stay for patients admitted to a heart failure teaching service at an academic tertiary care hospital. Three resident/intern teams rotate admitting responsibilities every 3 days, supervised by 1 attending cardiologist. Patients admitted overnight may be admitted independently by the on‐call resident without intern collaboration.
All physicians staffing our center's heart failure teaching service between August 1, 2013 and November 19, 2013 were recruited, and consecutively admitted adult patients were included. Patients were excluded if they did not have any cardiac diagnosis or if still admitted at study completion in February 2014. Deceased patients' time of death was counted as discharge.
Interns, residents, and attending cardiologists were interviewed independently within 24 hours of admission and asked to predict length of stay. Interns and residents were interviewed prior to rounds, and attendings thereafter. Electronic medical records were reviewed to determine date and time of admission and discharge, demographics, clinical variables, and discharge diagnoses.
The primary outcome was accuracy of predictions of length of stay, stratified by level of experience. Based on prior pilot data, at 80% power and a significance level (α) of 0.05, we estimated that predictions on 100 patients were needed to detect a 2‐day difference between actual and predicted length of stay.
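The sample‐size target can be reproduced approximately with the standard normal‐approximation formula for a paired comparison; the ~7.1‐day standard deviation below is our assumption for illustration, since the pilot SD is not reported:

```python
import math
from scipy.stats import norm

def paired_n(delta, sd, alpha=0.05, power=0.80):
    """Approximate sample size for detecting a mean paired difference
    of `delta` with standard deviation `sd`:
    n = ((z_{1-alpha/2} + z_{power}) * sd / delta)^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil((z * sd / delta) ** 2)

# With an assumed SD of ~7.1 days, detecting a 2-day difference in
# predicted vs. actual length of stay needs roughly 100 patients
n = paired_n(delta=2.0, sd=7.1)
```

Any pilot SD near 7 days would yield a target close to the 100 patients the authors report; a larger assumed SD would push the required sample size up quadratically.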
Student t tests were used to compare the difference between predicted and actual length of stay for each level of training. Analysis of variance (ANOVA) was used to compare accuracy of prediction by training level. Generalized estimating equation (GEE) modeling was applied to compare predictions among interns, residents, and attending cardiologists, accounting for clustering by individual physician. GEE models were adjusted for study week in a sensitivity analysis to determine if predictions improved over time.
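The t‐test and ANOVA steps above can be sketched with SciPy on simulated prediction errors; the means mirror the direction of the study's findings, while the 8‐day SD and sample draws are illustrative. The published analysis additionally fit GEE models to account for clustering by physician, which this sketch omits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated prediction errors (predicted minus actual LOS, in days) per
# training level; negative values mean under-prediction
errors = {
    "intern": rng.normal(-5.9, 8.0, 102),
    "resident": rng.normal(-4.3, 8.0, 161),
    "attending": rng.normal(-3.5, 8.0, 152),
}

# One-sample t test per level: is the mean prediction error nonzero?
t_results = {level: stats.ttest_1samp(e, 0.0) for level, e in errors.items()}

# One-way ANOVA: does prediction accuracy differ across training levels?
f_stat, p_anova = stats.f_oneway(*errors.values())
```

Testing each group's mean error against zero answers "do physicians systematically under‐predict?", while the ANOVA answers the separate question of whether experience level changes the size of that error.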
Analysis was performed using SAS 9.3 (SAS Institute Inc., Cary, NC) and R 2.14 (The R Foundation for Statistical Computing, Vienna, Austria). Institutional review board approval was granted, and physicians provided informed consent. All authors had access to primary data devoid of protected health information.
RESULTS
In total, 22 interns (<6 months of experience), 25 residents (1–3 years of experience), and 8 attending cardiologists (mean 19 ± 9.7 years of experience) were studied. Predictions were performed on 171 consecutively admitted patients. Five patients had noncardiac diagnoses and 1 patient remained admitted, leaving 165 patients for analysis. Predictions were made by all 3 physician levels for 98 patients. The remaining 67 patients had incomplete predictions, resulting from 63 intern, 13 attending, and 4 resident predictions that were unobtainable. Absent intern data predominantly resulted from night shift admissions; the remaining missing data were due to time-sensitive physician tasks that interfered with interviews.
Patient characteristics are described in Table 1. Physicians provided 415 predictions on 165 patients, 157 (95%) of whom survived to hospital discharge. Mean and median lengths of stay were 10.9 and 8 days (interquartile range [IQR], 4 to 13). Mean intern (N = 102), resident (N = 161), and attending (N = 152) predictions were 5.4 days (95% confidence interval [CI]: 4.6 to 6.2), 6.6 days (95% CI: 5.8 to 7.4), and 7.2 days (95% CI: 6.4 to 7.9), respectively. Median intern, resident, and attending predictions were 5 days (IQR, 3 to 7), 5 days (IQR, 3 to 7), and 6 days (IQR, 4 to 10). Mean differences between predicted and actual length of stay for interns, residents, and attendings were −5.9 days (95% CI: −8.2 to −3.6), −4.3 days (95% CI: −6.0 to −2.7), and −3.5 days (95% CI: −5.1 to −2.0), and were statistically significant for all groups (P < 0.0001). Median intern, resident, and attending differences between predicted and actual length of stay were −2 days (IQR, −7 to 0), −2 days (IQR, −7 to 0), and −1 day (IQR, −5 to 1), respectively. Predictions correlated poorly with actual length of stay (R² = 0.11).
Table 1. Patient Characteristics (N = 165)
Male: 105 (63%)
Age: 57 ± 16 years
White: 99 (60%)
Black: 52 (31%)
Asian, Hispanic, other, unknown: 16 (9%)
HF classification
  HF with reduced EF (EF ≤40%): 106 (64%)
  HF mixed/undefined (EF 41%–49%): 14 (8%)
  HF with preserved EF (EF ≥50%): 20 (12%)
  Right heart failure only: 5 (3%)
  Heart transplant with cardiac complications: 20 (12%)
Severity of illness on admission
  NYHA class I: 9 (5%)
  NYHA class II: 25 (15%)
  NYHA class III: 67 (41%)
  NYHA class IV: 32 (19%)
  NYHA class unknown*: 32 (19%)
Mean no. of home medications prior to admission: 13 ± 6
On intravenous inotropes prior to admission: 18 (11%)
On mechanical circulatory support prior to admission: 15 (9%)
Status post heart transplant: 20 (12%)
Invasive hemodynamic monitoring within 24 hours: 94 (57%)
Type of admission
  Admitted through emergency department: 71 (43%)
  Admitted from clinic: 35 (21%)
  Transferred from other acute care hospitals: 56 (34%)
  Admitted from skilled nursing or rehabilitation facility: 3 (2%)
Social history
  Lived alone prior to admission: 32 (19%)
  Prison/homeless/facility/unknown living situation: 8 (5%)
  Required assistance for IADLS/ADLS prior to admission: 29 (17%)
  Home health services initiated prior to admission: 42 (25%)
Prior admission history
  No known admissions in the prior year: 70 (42%)
  1 admission in the prior year: 37 (22%)
  2 admissions in the prior year: 21 (13%)
  3–10 admissions in the prior year: 36 (22%)
  Unknown readmission status: 1 (1%)
Readmitted patients
  Readmitted within 30 days: 38 (23%)
  Readmitted within 7 days: 13 (8%)
NOTE: Patient characteristics are for all included patients. Percentages may not total 100% because of rounding. Abbreviations: ADLS, activities of daily living; EF, ejection fraction; HF, heart failure; IADLS, instrumental activities of daily living; NYHA, New York Heart Association. *Patients with heart transplants were categorized as unknown if no NYHA class was documented.
Ninety-eight patients (59%) received predictions from physicians at all 3 experience levels. For these patients, mean and median lengths of stay were 11.3 days and 7.5 days (IQR, 4 to 13). Concordant with the entire cohort, median intern, resident, and attending predictions were 5 days (IQR, 3 to 7), 5 days (IQR, 3 to 7), and 6 days (IQR, 4 to 10), respectively. Differences between predicted and actual length of stay were statistically significant for all groups: the mean difference for interns, residents, and attendings was −5.8 days (95% CI: −8.2 to −3.4, P < 0.0001), −4.6 days (95% CI: −7.1 to −2.0, P = 0.0001), and −4.3 days (95% CI: −6.5 to −2.1, P = 0.0003), respectively (Figure 1).
Figure 1
Actual length of stay versus physicians' predictions (n = 98). Mean LOS (days) of all patients for whom there was a prediction made by all 3 physicians on the team. Predictions were significantly less than actual LOS for interns, residents, and attending cardiologists (P < 0.0001, P = 0.0001, P = 0.0003, respectively). There were no significant differences among predictions made by interns, residents, and attending cardiologists (P = 0.61). Abbreviations: LOS, length of stay.
Predictions improved modestly as level of experience increased, but the differences among training levels were not statistically significant, whether assessed by ANOVA (P = 0.64) or by GEE modeling accounting for clustering of predictions by physician (P = 0.61). Analyses adjusted for study week yielded similar results. Thus, experience did not improve accuracy.
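The weak correlation between predictions and actual length of stay (R² = 0.11) is the squared Pearson correlation, which can be computed as follows; the data below are illustrative, not the study's values:

```python
from statistics import mean

def r_squared(x, y):
    """Squared Pearson correlation between predictions x and outcomes y."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov ** 2 / (var_x * var_y)

predicted = [5, 6, 4, 7, 5, 8, 6]    # hypothetical predictions (days)
actual = [12, 8, 4, 20, 6, 9, 15]    # hypothetical lengths of stay (days)
r2 = r_squared(predicted, actual)    # between 0 (no fit) and 1 (perfect)
```

An R² of 0.11 means predictions explained only about 11% of the variance in actual length of stay.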
DISCUSSION
We prospectively measured the accuracy of physicians' length of stay predictions for patients with heart failure and compared accuracy by experience level. All physicians underestimated length of stay, with average differences between 3.5 and 6 days. Most notably, level of experience did not improve accuracy. Although we anticipated that experience would improve prediction, our findings did not support this hypothesis. Future studies of the factors affecting length of stay predictions would help explain these findings.
Our results are consistent with small, single-center studies of different patient and physician cohorts. Hulter Asberg found that internists were unable to predict whether a patient would remain admitted for 10 days or more, with poor interobserver reliability.[9] Mak et al. demonstrated that emergency physicians underestimated length of stay by an average of 2 days across a broad spectrum of patients admitted from an emergency department.[10] Physician predictions of length of stay have also been found to be inaccurate in an oncologic intensive care unit population.[11] Sullivan et al. found that academic general medicine physicians predicted next-day discharge with 27% sensitivity on the prior morning, improving significantly to 67% by the afternoon, and concluded that physicians can provide meaningful discharge predictions the afternoon before next-day discharge.[12] By focusing on patients with heart failure, a major driver of hospitalization and readmission, and by comparing providers by level of experience, we augment this existing body of work.
In addition to identifying patients at risk for readmission and mortality,[5, 6] accurate discharge prediction may improve safety of weekend discharges and patient satisfaction. Heart failure patients discharged on weekends receive less complete discharge instructions,[13] suffer higher mortality, and are readmitted more frequently than those discharged on weekdays.[14] Early and accurate predictions may enhance interventions targeting patients with anticipated weekend discharges. Furthermore, inadequate communication regarding anticipated discharge timing is a source of patient dissatisfaction,[15] and accurate prediction of discharge, if shared with patients, may improve patient satisfaction.
Our study has limitations. It was conducted at a single large academic tertiary care hospital, with predictions assessed on a teaching service. The severity of illness of this cohort may limit generalizability, and physicians may predict prognosis more accurately for healthier patients. We recorded predictions at the time of admission and did not assess whether accuracy improved closer to discharge. We did not collect predictions from nonphysician team members. Finally, the sample size and the absence of data on causes of prolonged hospitalization precluded an analysis of variables associated with prediction inaccuracy.
CONCLUSIONS
Physicians do not accurately forecast heart failure patients' length of stay at the time of admission, and level of experience does not improve accuracy. Future studies are warranted to determine whether predictions closer to discharge, by an interdisciplinary team, or with assistance of risk‐prediction models are more accurate than physician predictions at admission, and whether early identification of patients at risk for prolonged hospitalization improves outcomes. Ultimately, early and accurate length of stay forecasts may improve risk stratification, patient satisfaction, and discharge planning, and reduce adverse outcomes related to at‐risk discharges.
Acknowledgements
The authors acknowledge Katherine R. Courtright, MD, for her gracious assistance with statistical analysis.
REFERENCES
1. Heidenreich PA, Albert NM, Allen LA, et al. Forecasting the impact of heart failure in the United States: a policy statement from the American Heart Association. Circ Heart Fail. 2013;6:606–619.
2. Kansagara D, Chiovaro JC, Kagen D, et al. So many options, where do we start? An overview of the care transitions literature. J Hosp Med. 2016;11(3):221–230.
3. Goncalves-Bradley DC, Lannin NA, Clemson LM, Cameron ID, Shepperd S. Discharge planning from hospital. Cochrane Database Syst Rev. 2016;1:CD000313.
4. Department of Health and Human Services, Centers for Medicare and Medicaid Services. 42 CFR Parts 482, 484, 485. Medicare and Medicaid programs; revisions to requirements for discharge planning for hospitals, critical access hospitals, and home health agencies; proposed rule. Fed Regist. 2015;80(212):68126–68155.
5. Au A, McAlister FA, Bakal JA, Ezekowitz J, Kaul P, van Walraven C. Predicting the risk of unplanned readmission or death within 30 days of discharge after a heart failure hospitalization. Am Heart J. 2012;164:365–372.
6. Cotter G, Davison BA, Milo O, et al. Predictors and associations with outcomes of length of hospital stay in patients with acute heart failure: results from VERITAS [published online December 22, 2015]. J Card Fail. doi:10.1016/j.cardfail.2015.12.017.
7. Allaudeen N, Schnipper JL, Orav EJ, Wachter RM, Vidyarthi AR. Inability of providers to predict unplanned readmissions. J Gen Intern Med. 2011;26(7):771–776.
8. Yamokoski LM, Hasselblad V, Moser DK, et al. Prediction of rehospitalization and death in severe heart failure by physicians and nurses of the ESCAPE trial. J Card Fail. 2007;13(1):8–13.
9. Hulter Asberg K. Physicians' outcome predictions for elderly patients: survival, hospital discharge, and length of stay in a department of internal medicine. Scand J Soc Med. 1986;14(3):127–132.
10. Mak G, Grant WD, McKenzie JC, McCabe JB. Physicians' ability to predict hospital length of stay for patients admitted to the hospital from the emergency department. Emerg Med Int. 2012;2012:824674.
11. Nassar AP, Caruso P. ICU physicians are unable to accurately predict length of stay at admission: a prospective study. Int J Qual Health Care. 2016;28(1):99–103.
12. Sullivan B, Ming B, Boggan JC, et al. An evaluation of physician predictions of discharge on a general medicine service. J Hosp Med. 2015;10(12):808–810.
13. Horwich TB, Hernandez AF, Liang L, et al. Weekend hospital admission and discharge for heart failure: association with quality of care and clinical outcomes. Am Heart J. 2009;158(3):451–458.
14. McAlister FA, Au AG, Majumdar SR, Youngson E, Padwal RS. Postdischarge outcomes in heart failure are better for teaching hospitals and weekday discharges. Circ Heart Fail. 2013;6(5):922–929.
15. Manning DM, Tammel KJ, Blegen RN, et al. In-room display of day and time patient is anticipated to leave hospital: a "discharge appointment." J Hosp Med. 2007;2(1):13–16.
INTRODUCTION
Heart failure is a frequent cause of hospital admission in the United States, with an estimated cost of $31 billion per year.[1] Discharging a patient with heart failure requires a multidisciplinary approach that includes anticipating a discharge date, scheduling follow-up, reconciling medications, assessing home-care or placement needs, and delivering patient education.[2, 3] Comprehensive transitional care interventions reduce readmissions and mortality,[2] and individually tailored, structured discharge plans decrease length of stay and readmissions.[3] The Centers for Medicare and Medicaid Services recently proposed that discharge planning begin within 24 hours of inpatient admission,[4] despite inadequate data on the optimal time to begin discharge planning.[3] In addition to enabling transitional care, identifying patients vulnerable to extended hospitalization aids risk stratification, as prolonged length of stay is associated with increased risk of readmission and mortality.[5, 6]
Physicians are unable to accurately prognosticate whether patients will experience short-term outcomes such as readmission or mortality.[7, 8] Likewise, physicians do not predict length of stay accurately for heterogeneous patient populations,[9, 10, 11] even on the morning prior to anticipated discharge.[12] Prediction accuracy for patients admitted with heart failure, however, has not been adequately studied. The objectives of this study were to measure the accuracy of inpatient physicians' early predictions of length of stay for patients admitted with heart failure and to determine whether level of experience improved accuracy.
Disclosure: Nothing to report
Heart failure is a frequent cause of hospital admission in the United States, with an estimated cost of $31 billion dollars per year.[1] Discharging a patient with heart failure requires a multidisciplinary approach that includes anticipating a discharge date, scheduling follow‐up, reconciling medications, assessing home‐care or placement needs, and delivering patient education.[2, 3] Comprehensive transitional care interventions reduce readmissions and mortality.[2] Individually tailored and structured discharge plans decrease length of stay and readmissions.[3] The Centers for Medicare and Medicaid Services recently proposed that discharge planning begin within 24 hours of inpatient admissions,[4] despite inadequate data surrounding the optimal time to begin discharge planning.[3] In addition to enabling transitional care, identifying patients vulnerable to extended hospitalization aids in risk stratification, as prolonged length of stay is associated with increased risk of readmission and mortality.[5, 6]
Physicians are not able to accurately prognosticate whether patients will experience short‐term outcomes such as readmissions or mortality.[7, 8] Likewise, physicians do not predict length of stay accurately for heterogeneous patient populations,[9, 10, 11] even on the morning prior to anticipated discharge.[12] Prediction accuracy for patients admitted with heart failure, however, has not been adequately studied. The objectives of this study were to measure the accuracy of inpatient physicians' early predictions of length of stay for patients admitted with heart failure and to determine whether level of experience improved accuracy.
METHODS
In this prospective, observational study, we measured physicians' predictions of length of stay for patients admitted to a heart failure teaching service at an academic tertiary care hospital. Three resident/emntern teams rotate admitting responsibilities every 3 days, supervised by 1 attending cardiologist. Patients admitted overnight may be admitted independently by the on‐call resident without intern collaboration.
All physicians staffing our center's heart failure teaching service between August 1, 2013 and November 19, 2013 were recruited, and consecutively admitted adult patients were included. Patients were excluded if they did not have any cardiac diagnosis or if still admitted at study completion in February 2014. Deceased patients' time of death was counted as discharge.
Interns, residents, and attending cardiologists were interviewed independently within 24 hours of admission and asked to predict length of stay. Interns and residents were interviewed prior to rounds, and attendings thereafter. Electronic medical records were reviewed to determine date and time of admission and discharge, demographics, clinical variables, and discharge diagnoses.
The primary outcome was accuracy of predictions of length of stay stratified by level of experience. Based on prior pilot data, at 80% power and significance level () of 0.05, we estimated that predictions were needed on 100 patients to detect a 2‐day difference between actual and predicted length of stay.
Student t tests were used to compare the difference between predicted and actual length of stay for each level of training. Analysis of variance (ANOVA) was used to compare accuracy of prediction by training level. Generalized estimating equation (GEE) modeling was applied to compare predictions among interns, residents, and attending cardiologists, accounting for clustering by individual physician. GEE models were adjusted for study week in a sensitivity analysis to determine if predictions improved over time.
Analysis was performed using SAS 9.3 (SAS Institute Inc., Cary, NC) and R 2.14 (The R Foundation for Statistical Computing, Vienna, Austria). Institutional review board approval was granted, and physicians provided informed consent. All authors had access to primary data devoid of protected health information.
RESULTS
In total, 22 interns (<6 months experience), 25 residents (13 years experience), and 8 attending cardiologists (mean 19 9.7 years experience) were studied. Predictions were performed on 171 consecutively admitted patients. Five patients had noncardiac diagnoses and 1 patient remained admitted, leaving 165 patients for analysis. Predictions were made by all 3 physician levels on 98 patients. There were 67 patients with incomplete predictions as a result of 63 intern, 13 attending, and 4 resident predictions that were unobtainable. Absent intern data predominantly resulted from night shift admissions. Remaining missing data were due to time‐sensitive physician tasks that interfered with physician interviews.
Patient characteristics are described in Table 1. Physicians provided 415 predictions on 165 patients, 157 (95%) of whom survived to hospital discharge. Mean and median lengths of stay were 10.9 and 8 days (interquartile range [IQR], 4 to 13). Mean intern (N = 102), resident (N = 161), and attending (N = 152) predictions were 5.4 days (95% confidence interval [CI]: 4.6 to 6.2), 6.6 days (95% CI: 5.8 to 7.4) and 7.2 days (95% CI: 6.4 to 7.9), respectively. Median intern, resident, and attending predictions were 5 days (IQR, 3 to 7), 5 days (IQR, 3 to 7), and 6 days (IQR, 4 to 10). Mean differences between predicted and actual length of stay for interns, residents and attendings were 9 days (95% CI: 8.2 to 3.6), 4.3 days (95% C: 6.0 to 2.7), and 3.5 days (95% CI: 5.1 to 2.0). The mean difference between predicted and actual length of stay was statistically significant for all groups (P < 0.0001). Median intern, resident, and attending differences between predicted and actual were 2 days (IQR, 7 to 0), 2 days (IQR, 7 to 0), and 1 day (IQR, 5 to 1), respectively. Predictions correlated poorly with actual length of stay (R2 = 0.11).
Table 1. Patient Characteristics
Patients, N = 165 (%)

Male: 105 (63%)
Age: 57 ± 16 years
White: 99 (60%)
Black: 52 (31%)
Asian, Hispanic, other, unknown: 16 (9%)
HF classification
  HF with a reduced EF (EF ≤40%): 106 (64%)
  HF mixed/undefined (EF 41%–49%): 14 (8%)
  HF with a preserved EF (EF ≥50%): 20 (12%)
  Right heart failure only: 5 (3%)
  Heart transplant with cardiac complications: 20 (12%)
Severity of illness on admission
  NYHA class I: 9 (5%)
  NYHA class II: 25 (15%)
  NYHA class III: 67 (41%)
  NYHA class IV: 32 (19%)
  NYHA class unknown*: 32 (19%)
Mean no. of home medications prior to admission: 13 ± 6
On intravenous inotropes prior to admission: 18 (11%)
On mechanical circulatory support prior to admission: 15 (9%)
Status post heart transplant: 20 (12%)
Invasive hemodynamic monitoring within 24 hours: 94 (57%)
Type of admission
  Admitted through emergency department: 71 (43%)
  Admitted from clinic: 35 (21%)
  Transferred from other acute care hospitals: 56 (34%)
  Admitted from skilled nursing or rehabilitation facility: 3 (2%)
Social history
  Lived alone prior to admission: 32 (19%)
  Prison/homeless/facility/unknown living situation: 8 (5%)
  Required assistance for IADLS/ADLS prior to admission: 29 (17%)
  Home health services initiated prior to admission: 42 (25%)
Prior admission history
  No known admissions in the prior year: 70 (42%)
  1 admission in the prior year: 37 (22%)
  2 admissions in the prior year: 21 (13%)
  3–10 admissions in the prior year: 36 (22%)
  Unknown readmission status: 1 (1%)
Readmitted patients
  Readmitted within 30 days: 38 (23%)
  Readmitted within 7 days: 13 (8%)

NOTE: Patient characteristics are for all included patients. Percentages may not add up to 100% due to rounding. Abbreviations: ADLS, Activities of Daily Living; EF, ejection fraction; HF, heart failure; IADLS, Instrumental Activities of Daily Living; NYHA, New York Heart Association. *Patients with heart transplants were categorized unknown if no NYHA class was documented.
Ninety‐eight patients (59%) received predictions from physicians at all 3 experience levels. Mean and median lengths of stay were 11.3 days and 7.5 days (IQR, 4 to 13). Concordant with the entire cohort, median intern, resident, and attending predictions for these patients were 5 days (IQR, 3 to 7), 5 days (IQR, 3 to 7), and 6 days (IQR, 4 to 10), respectively. Differences between predicted and actual length of stay were statistically significant for all groups: the mean difference for interns, residents, and attendings was −5.8 days (95% CI: −8.2 to −3.4; P < 0.0001), −4.6 days (95% CI: −7.1 to −2.0; P = 0.0001), and −4.3 days (95% CI: −6.5 to −2.1; P = 0.0003), respectively (Figure 1).
Figure 1
Actual length of stay versus physicians' predictions (n = 98). Mean LOS (days) of all patients for whom there was a prediction made by all 3 physicians on the team. Predictions were significantly less than actual LOS for interns, residents, and attending cardiologists (P < 0.0001, P = 0.0001, P = 0.0003, respectively). There were no significant differences among predictions made by interns, residents, and attending cardiologists (P = 0.61). Abbreviations: LOS, length of stay.
Predictions differed among providers, improving as level of experience increased, but the differences were not statistically significant by ANOVA (P = 0.64) or by generalized estimating equation (GEE) modeling to account for clustering of predictions by physician (P = 0.61). Analyses that adjusted for study week yielded similar results. Thus, experience did not improve accuracy.
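The ANOVA comparison described above can be sketched with entirely synthetic numbers (the study's data are not reproduced here): the one-way F statistic contrasts between-group and within-group variability of the prediction errors.

```python
# One-way ANOVA F statistic for comparing mean prediction errors across
# experience levels. Data below are synthetic; the study reported P = 0.64.
import statistics


def anova_f(*groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_vals = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))


# Hypothetical predicted-minus-actual errors (days) by experience level.
interns = [-6, -4, -8, -5, -7]
residents = [-5, -3, -7, -4, -6]
attendings = [-4, -2, -6, -3, -5]
f_stat = anova_f(interns, residents, attendings)
```

An F statistic small relative to the critical value for the relevant degrees of freedom corresponds to a large P value, i.e., no significant difference among groups.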
DISCUSSION
We prospectively measured the accuracy of physicians' length of stay predictions for patients hospitalized with heart failure and compared accuracy by experience level. All physicians underestimated length of stay, with average differences between 3.5 and 6 days. Most notably, level of experience did not improve accuracy. Although we anticipated that experience would improve prediction, our findings did not support this hypothesis. Future studies of factors affecting length of stay predictions would help to better explain our findings.
Our results are consistent with small, single‐center studies of different patient and physician cohorts. Hulter Asberg found that internists at a hospital were unable to predict whether a patient would remain admitted 10 days or more, with poor interobserver reliability.[9] Mak et al. demonstrated that emergency physicians underestimated length of stay by an average of 2 days when predicting length of stay on a broad spectrum of patients in an emergency department.[10] Physician predictions of length of stay have been found to be inaccurate in a center's oncologic intensive care unit population.[11] Sullivan et al. found that academic general medicine physicians predicted discharge with 27% sensitivity the morning prior to next‐day discharge, which improved significantly to 67% by the afternoon, concluding that physicians can provide meaningful discharge predictions the afternoon prior to next‐day discharge.[12] By focusing on patients with heart failure, a major driver of hospitalization and readmission, and comparing providers by level of experience, we augment this existing body of work.
In addition to identifying patients at risk for readmission and mortality,[5, 6] accurate discharge prediction may improve safety of weekend discharges and patient satisfaction. Heart failure patients discharged on weekends receive less complete discharge instructions,[13] suffer higher mortality, and are readmitted more frequently than those discharged on weekdays.[14] Early and accurate predictions may enhance interventions targeting patients with anticipated weekend discharges. Furthermore, inadequate communication regarding anticipated discharge timing is a source of patient dissatisfaction,[15] and accurate prediction of discharge, if shared with patients, may improve patient satisfaction.
Our study has limitations. It was conducted at a single large academic tertiary care hospital, with predictions assessed on a teaching service. The severity of illness of this cohort may limit generalizability, and physicians may predict the course of healthier patients more accurately. We recorded predictions at the time of admission and did not assess whether accuracy improved closer to discharge. We did not collect predictions from nonphysician team members. Sample size and absent data regarding the causes of prolonged hospitalization prohibited an analysis of variables associated with prediction inaccuracy.
CONCLUSIONS
Physicians do not accurately forecast heart failure patients' length of stay at the time of admission, and level of experience does not improve accuracy. Future studies are warranted to determine whether predictions closer to discharge, by an interdisciplinary team, or with assistance of risk‐prediction models are more accurate than physician predictions at admission, and whether early identification of patients at risk for prolonged hospitalization improves outcomes. Ultimately, early and accurate length of stay forecasts may improve risk stratification, patient satisfaction, and discharge planning, and reduce adverse outcomes related to at‐risk discharges.
Acknowledgements
The authors acknowledge Katherine R Courtright, MD, for her gracious assistance with statistical analysis.
Disclosure: Nothing to report
References
1. Heidenreich PA, Albert NM, Allen LA, et al. Forecasting the impact of heart failure in the United States: a policy statement from the American Heart Association. Circ Heart Fail. 2013;6:606–619.
2. Kansagara D, Chiovaro JC, Kagen D, et al. So many options, where do we start? An overview of the care transitions literature. J Hosp Med. 2016;11(3):221–230.
3. Goncalves‐Bradley DC, Lannin NA, Clemson LM, Cameron ID, Shepperd S. Discharge planning from hospital. Cochrane Database Syst Rev. 2016;1:CD000313.
4. Department of Health and Human Services, Centers for Medicare and Medicaid Services. 42 CFR Parts 482, 484, 485. Medicare and Medicaid programs; revisions to requirements for discharge planning for hospitals, critical access hospitals, and home health agencies; proposed rule. Fed Regist. 2015;80(212):68126–68155.
5. Au A, McAlister FA, Bakal JA, Ezekowitz J, Kaul P, van Walraven C. Predicting the risk of unplanned readmission or death within 30 days of discharge after a heart failure hospitalization. Am Heart J. 2012;164:365–372.
6. Cotter G, Davison BA, Milo O, et al. Predictors and associations with outcomes of length of hospital stay in patients with acute heart failure: results from VERITAS [published online December 22, 2015]. J Card Fail. doi:10.1016/j.cardfail.2015.12.017.
7. Allaudeen N, Schnipper JL, Orav EJ, Wachter RM, Vidyarthi AR. Inability of providers to predict unplanned readmissions. J Gen Intern Med. 2011;26(7):771–776.
8. Yamokoski LM, Hasselblad V, Moser DK, et al. Prediction of rehospitalization and death in severe heart failure by physicians and nurses of the ESCAPE trial. J Card Fail. 2007;13(1):8–13.
9. Hulter Asberg K. Physicians' outcome predictions for elderly patients. Survival, hospital discharge, and length of stay in a department of internal medicine. Scand J Soc Med. 1986;14(3):127–132.
10. Mak G, Grant WD, McKenzie JC, McCabe JB. Physicians' ability to predict hospital length of stay for patients admitted to the hospital from the emergency department. Emerg Med Int. 2012;2012:824674.
11. Nassar AP, Caruso P. ICU physicians are unable to accurately predict length of stay at admission: a prospective study. Int J Qual Health Care. 2016;28(1):99–103.
12. Sullivan B, Ming B, Boggan JC, et al. An evaluation of physician predictions of discharge on a general medicine service. J Hosp Med. 2015;10(12):808–810.
13. Horwich TB, Hernandez AF, Liang L, et al. Weekend hospital admission and discharge for heart failure: association with quality of care and clinical outcomes. Am Heart J. 2009;158(3):451–458.
14. McAlister FA, Au AG, Majumdar SR, Youngson E, Padwal RS. Postdischarge outcomes in heart failure are better for teaching hospitals and weekday discharges. Circ Heart Fail. 2013;6(5):922–929.
15. Manning DM, Tammel KJ, Blegen RN, et al. In‐room display of day and time patient is anticipated to leave hospital: a “discharge appointment.” J Hosp Med. 2007;2(1):13–16.
Address for correspondence and reprint requests: Stephen Kimmel, MD, Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania School of Medicine, 923 Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104‐6021; Telephone: 215‐898‐1740; Fax: 215‐573‐3106; E‐mail: [email protected]
Inappropriate antimicrobial use in hospitalized patients is a well‐recognized driver for the development of drug‐resistant organisms and antimicrobial‐related complications such as Clostridium difficile infection (CDI).[1, 2] Infection with C difficile affects nearly 500,000 people annually, resulting in higher healthcare expenditures, longer lengths of hospital stay, and nearly 15,000 deaths.[3] Data from the Centers for Disease Control and Prevention (CDC) suggest that a 30% reduction in the use of broad‐spectrum antimicrobials, or a 5% reduction in the proportion of hospitalized patients receiving antimicrobials, could equate to a 26% reduction in CDI.[4] It is estimated that up to 50% of antimicrobial use in the hospital setting may be inappropriate.[5]
Since the Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America published guidelines for developing formal, hospital‐based antimicrobial stewardship programs in 2007, stewardship practices have been adapted by frontline providers to fit day‐to‐day inpatient care.[5] A recent review by Hamilton et al. described several studies in which stewardship practices were embedded into daily workflows by way of checklists, education reminders, and periodic review of antimicrobial usage, as well as a multicenter pilot of point‐of‐care stewardship interventions successfully implemented by various providers including nursing, pharmacists, and hospitalists.[6]
In response to the CDC's 2010 Get Smart for Healthcare campaign, which focused on stemming antimicrobial resistance and improving antimicrobial use, the Institute for Healthcare Improvement (IHI), in partnership with the CDC, brought together experts in the field to identify practical and feasible target practices for hospital‐based stewardship and created a Driver Diagram to guide implementation efforts (Figure 1). Rohde et al. described the initial pilot testing of these practices, the decision to more actively engage frontline providers, and the 3 key strategies identified as high‐yield improvement targets: enhancing the visibility of antimicrobial use at the point of care, creating easily accessible antimicrobial guidelines for common infections, and the implementation of a 72‐hour timeout after initiation of antimicrobials.[7]
Figure 1
Shown is the Antibiotic Stewardship Driver Diagram that was developed as part of the Centers for Disease Control and Prevention (CDC) and Institute for Healthcare Improvement partnered efforts to stem antimicrobial overuse through the CDC's Get Smart for Healthcare campaign. Eight pilot hospitals were recruited to participate in field testing and to refine the diagram in a variety of settings from September 2011 through June 2012.
In this article, we describe how, in partnership with the IHI and the CDC, the hospital medicine programs at 5 diverse hospitals iteratively tested these 3 strategies with a goal of identifying the barriers and facilitators to effective hospitalist‐led antimicrobial stewardship.
METHODS
Representatives from 5 hospital medicine programs, IHI, and the CDC attended a kick‐off meeting at the CDC in November 2012 to discuss the 3 proposed strategies, examples of prior testing, and ideas for implementation. Each hospitalist provided a high‐level summary of the current state of stewardship efforts at their respective institutions, identified possible future states related to the improvement strategies, and anticipated problems in achieving them. The 3 key strategies are described below.
Improved Documentation/Visibility at Points of Care
Making antimicrobial indication, day of therapy, and anticipated duration transparent in the medical record was the targeted improvement strategy to avoid unnecessary antimicrobial days that can result from provider uncertainty, particularly during patient handoffs. Daily hospitalist documentation was identified as a vehicle through which these aspects of antimicrobial use could be effectively communicated and propagated from provider to provider.
Stewardship educational sessions and/or awareness campaigns were hospitalist led, and were accompanied by follow‐up reminders in the forms of emails, texts, flyers, or conferences. Infectious disease physicians were not directly involved in education but were available for consultation if needed.
Improved Guideline Clarity and Accessibility
Enhancing the availability of guidelines for frequently encountered infections and clarifying key guideline recommendations such as treatment duration were identified as the improvement strategies to help make treatment regimens more appropriate and consistent across providers.
Interventions included designing simplified pocket cards for commonly encountered infections (see Supporting Information, Appendix A, in the online version of this article), collaborating with infectious disease physicians on guideline development, disseminating guidelines through email, smartphone, and wall flyers, and creating a continuing medical education module focused on stewardship practices.
72‐Hour Antimicrobial Timeout
The 72‐hour antimicrobial timeout required that hospitalists routinely reassess antimicrobial use 72 hours following antimicrobial initiation, a time when most pertinent culture data had returned. Hospitalists partnered with clinical pharmacists at all sites, and addressed the following questions during each timeout: (1) Does the patient have a condition that requires continued use of antimicrobials? (2) Can the current antimicrobial regimen be tailored based on culture data? (3) What is the anticipated treatment duration? A variety of modifications occurred during timeouts, including broadening or narrowing the antimicrobial regimen based on culture data, switching to an oral antimicrobial, adjusting dose or frequency based on patient‐specific factors, as well as discontinuation of antimicrobials. Following the initial timeout, further adjustments were made as the clinical situation dictated; intermittent partnered timeouts continued during a patient's hospitalization on an individualized basis. Hospitalists were encouraged to independently review new diagnostic information daily and make changes as needed outside the dedicated time‐out sessions. All decisions to adjust antimicrobial regimens were provider driven; no hospitals employed automated antimicrobial discontinuation without provider input.
Implementation and Evaluation
Each site was tasked with conducting small tests of change aimed at implementing at least 1, and ideally all 3 strategies. Small, reasonably achievable interventions were preferred to large hospital‐wide initiatives so that key barriers and facilitators to the change could be quickly identified and addressed.
Methods of data collection varied across institutions and included anonymous physician survey, face‐to‐face physician interviews, and medical record review. Evaluations of hospital‐specific interventions utilized convenience samples to obtain real time, actionable data. Postintervention data were distributed through biweekly calls and compiled at the conclusion of the project. Barriers and facilitators of hospitalist‐centered antimicrobial stewardship collected over the course of the project were reviewed and used to identify common themes.
RESULTS
Participating hospitals included 1 community nonteaching hospital, 2 community teaching hospitals, and 2 academic medical centers. All hospitals used computerized order entry and had prior quality improvement experience; 4 out of 5 hospitals used electronic medical records. Postintervention data on antimicrobial documentation and timeouts were compiled and shared, and successes were identified. For example, 2 hospitals saw an increase in complete antimicrobial documentation from 4% and 8% to 51% and 65%, respectively, of medical records reviewed over a 3‐month period. Additionally, cumulative timeout data across all hospitals showed that out of 726 antimicrobial timeouts evaluated, the regimen was optimized or discontinued 218 times (30%).
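The counts reported above can be checked with quick arithmetic (a trivial sketch using only the numbers stated in the text):

```python
# Verify the reported timeout result: 218 regimen changes out of 726 timeouts.
timeouts_evaluated = 726
regimens_changed = 218
change_rate = regimens_changed / timeouts_evaluated  # about 0.30

# Documentation completeness before/after at the two hospitals cited above,
# expressed as absolute percentage-point gains.
doc_gains = [(4, 51), (8, 65)]
gains_pp = [after - before for before, after in doc_gains]
```

The change rate rounds to the 30% reported, and the two documentation improvements correspond to 47- and 57-percentage-point gains.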
Each site's key implementation barriers and facilitators were collected. Examples were compiled and common themes emerged (Table 1).
Common Themes of Barriers and Facilitators to Antimicrobial Stewardship Within Each Hospitalist Program With Accompanying Examples
NOTE: Barriers and facilitators were collected during biweekly conference calls as well as upon conclusion of our initiative.
Barriers: What impediments did we experience during our stewardship project?
Schedule and practice variability
Physician variability in structure of antimicrobial documentation
Prescribing etiquette: it's difficult to change course of treatment plan started by a colleague
Competing schedule demands of hospitalist and pharmacist
Skepticism of antimicrobial stewardship importance
Perception of incorporating stewardship practices into daily work as time consuming
Improvement project fatigue from competing quality improvement initiatives
Unclear leadership buy‐in
Focusing too broadly
Choosing large initial interventions, which take significant time/effort to complete and quantify
Setting unrealistic expectations (eg, expecting perfect adherence to documentation, guidelines, or timeout)
Facilitators: What countermeasures did we target to overcome barriers?
Engage the hospitalists
Establish a core part of the hospitalist group as stewardship champions
Speak 1‐on‐1 to colleagues about specific goals and ways to achieve them
Establish buy‐in from leadership
Encourage participation from a multidisciplinary team (eg, bedside nursing, clinical pharmacists)
Collect real time data and feedback
Utilize a data collection tool if possible/engage hospital coders to identify appropriate diagnoses
Define your question and identify baseline data prior to intervention
Give rapid cycle feedback to colleagues that can impact antimicrobial prescribing in real time
Recognize and reward high performers
Limit scope
Start with small, quickly implementable interventions
Identify interventions that are easy to integrate into hospitalist workflow
DISCUSSION
We successfully brought together hospitalists from diverse institutions to undertake small tests of change aimed at 3 key antimicrobial use improvement strategies. Following our interventions, significant improvement in antimicrobial documentation occurred at 2 institutions focusing on this improvement strategy, and 72‐hour timeouts performed across all hospitals tailored antimicrobial use in 30% of the sessions. Through frequent collaborative discussions and information sharing, we were able to identify common barriers and facilitators to hospitalist‐centered stewardship efforts.
Each participating hospital medicine program noticed a gradual shift in thinking among their colleagues, from initial skepticism about embedding stewardship within their daily workflow, to general acceptance that it was a worthwhile and meaningful endeavor. We posited that this transition in belief and behavior evolved for several reasons. First, each group was educated about their own, personal prescribing practices from the outset rather than presenting abstract data. This allowed for ownership of the problem and buy‐in to improve it. Second, participants were able to experience the benefits at an individual level while the interventions were ongoing (eg, having other providers reciprocate structured documentation during patient handoffs, making antimicrobial plans clearer), reinforcing the achievability of stewardship practices within each group. Additionally, we focused on making small, manageable interventions that did not seem disruptive to hospitalists' daily workflow. For example, 1 group instituted antimicrobial timeouts during preexisting multidisciplinary rounds with clinical pharmacists. Last, project champions had both leadership and frontline roles within their groups and set the example for stewardship practices, which conveyed that this was a priority at the leadership level. These findings are in line with those of Charani et al., who evaluated behavior change strategies that influence antimicrobial prescribing in acute care. The authors found that behavioral determinants and social norms strongly influence prescribing practices in acute care, and that antimicrobial stewardship improvement projects should account for these influences.[8]
We also identified several barriers to antimicrobial stewardship implementation (Table 1) and proposed measures to address these barriers in future improvement efforts. For example, hospital medicine programs without a preexisting clinical pharmacy partnership asked hospitalist leadership for more direct clinical pharmacy involvement, recognizing the importance of a physician‐pharmacy alliance for stewardship efforts. To more effectively embed antimicrobial stewardship into daily routine, several hospitalists suggested standardized order sets for commonly encountered infections, as well as routine feedback on prescribing practices. Furthermore, although our simplified antimicrobial guideline pocket card enhanced access to this information, several colleagues suggested a smart phone application that would make access even easier and less cumbersome. Last, given the concern about the sustainability of antimicrobial stewardship initiatives, we recommended periodic reminders, random medical record review, and re‐education if necessary on our 3 strategies and their purpose.
Our study is not without limitations. Each participating hospitalist group enacted hospital‐specific interventions based on individual hospitalist program needs and goals, and although there was collective discussion, no group was tasked to undertake another group's initiative, thereby limiting generalizability. We did, however, identify common facilitators that could be adapted to a wide variety of hospitalist programs. We also note that our 3 main strategies were included in a recent review of quality indicators for measuring the success of antimicrobial stewardship programs; thus, although details of individual practice may vary, in principle these concepts can help identify areas for improvement within each unique stewardship program.[9] Importantly, we were unable to evaluate the impact of the 3 key improvement strategies on important clinical outcomes such as overall antimicrobial use, complications including CDI, and cost. However, others have found that improvement strategies similar to our 3 key processes are associated with meaningful improvements in clinical outcomes as well as reductions in healthcare costs.[10, 11] Last, long‐term impact and sustainability were not evaluated. By choosing interventions that were viewed by frontline providers as valuable and attainable, however, we feel that each group will likely continue current practices beyond the initial evaluation timeframe.
Although these 5 hospitalist groups were able to successfully implement several aspects of the 3 key improvement strategies, we recognize that this is only the first step. Further effort is needed to quantify the impact of these improvement efforts on objective patient outcomes such as readmissions, length of stay, and antimicrobial‐related complications, which will better inform our local and national leaders on the inherent clinical and financial gains associated with hospitalist‐led stewardship work. Finally, creative ways to better integrate stewardship activities into existing provider workflows (eg, decision support and automation) will further accelerate improvement efforts.
In summary, hospitalists at 5 diverse institutions successfully implemented key antimicrobial improvement strategies and identified important implementation facilitators and barriers. Future efforts at hospitalist‐led stewardship should focus on strategies to scale‐up interventions and evaluate their impact on clinical outcomes and cost.
Acknowledgements
The authors thank Latoya Kuhn, MPH, for her assistance with statistical analyses. We also thank the clinical pharmacists at each institution for their partnership in stewardship efforts: Patrick Arnold, PharmD, and Matthew Tupps, PharmD, MHA, from University of Michigan Hospital and Health System; and Roland Tam, PharmD, from Emory Johns Creek Hospital.
Disclosures: Dr. Flanders reports consulting fees or honoraria from the Institute for Healthcare Improvement, has provided consultancy to the Society of Hospital Medicine, has served as a reviewer for expert testimony, received honoraria as a visiting lecturer to various hospitals, and has received royalties from publisher John Wiley & Sons. He has also received grant funding from Blue Cross Blue Shield of Michigan and the Agency for Healthcare Research and Quality. Dr. Ko reports consultancy for the American Hospital Association and the Society of Hospital Medicine involving work with catheter‐associated urinary tract infections. Ms. Jacobsen reports grant funding from the Institute for Healthcare Improvement. Dr. Rosenberg reports consultancy for Bristol‐Myers Squibb, Forest Pharmaceuticals, and Pfizer. The funding source for this collaborative was through the Institute for Healthcare Improvement and Centers for Disease Control and Prevention. Funding was provided by the Department of Health and Human Services, the Centers for Disease Control and Prevention, the National Center for Emerging Zoonotic and Infectious Diseases, and the Division of Healthcare Quality Promotion/Office of the Director. Avaris Concepts served as the prime contractor and the Institute for Healthcare Improvement as the subcontractor for the initiative. The findings and conclusions in this report represent the views of the authors and might not reflect the views of the Centers for Disease Control and Prevention. The authors report no conflicts of interest.
References
1. Maragakis LL, Perencevich EN, Cosgrove SE. Clinical and economic burden of antimicrobial resistance. Expert Rev Anti Infect Ther. 2008;6(5):751–763.
2. Roberts RR, Hota B, Ahmad I, et al. Hospital and societal costs of antimicrobial‐resistant infections in a Chicago teaching hospital: implications for antibiotic stewardship. Clin Infect Dis. 2009;49(8):1175–1184.
3. Lessa FC, Mu Y, Bamberg WM, et al. Burden of Clostridium difficile infection in the United States. N Engl J Med. 2015;372(9):825–834.
4. Fridkin S, Baggs J, Fagan R, et al.; Centers for Disease Control and Prevention (CDC). Vital signs: improving antibiotic use among hospitalized patients. MMWR Morb Mortal Wkly Rep. 2014;63(9):194–200.
5. Dellit TH, Owens RC, McGowan JE, et al.; Infectious Diseases Society of America; Society for Healthcare Epidemiology of America. Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America guidelines for developing an institutional program to enhance antimicrobial stewardship. Clin Infect Dis. 2007;44(2):159–177.
6. Hamilton KW, Gerber JS, Moehring R, et al.; Centers for Disease Control and Prevention Epicenters Program. Point‐of‐prescription interventions to improve antimicrobial stewardship. Clin Infect Dis. 2015;60(8):1252–1258.
7. Rohde JM, Jacobsen D, Rosenberg DJ. Role of the hospitalist in antimicrobial stewardship: a review of work completed and description of a multisite collaborative. Clin Ther. 2013;35(6):751–757.
8. Charani E, Edwards R, Sevdalis N, et al. Behavior change strategies to influence antimicrobial prescribing in acute care: a systematic review. Clin Infect Dis. 2011;53(7):651–662.
9. van den Bosch CM, Geerlings SE, Natsch S, Prins JM, Hulscher ME. Quality indicators to measure appropriate antibiotic use in hospitalized adults. Clin Infect Dis. 2015;60(2):281–291.
10. Bosso JA, Drew RH. Application of antimicrobial stewardship to optimise management of community acquired pneumonia. Int J Clin Pract. 2011;65(7):775–783.
11. Davey P, Brown E, Charani E, et al. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database Syst Rev. 2013;4:CD003543.
Inappropriate antimicrobial use in hospitalized patients is a well-recognized driver of drug-resistant organisms and antimicrobial-related complications such as Clostridium difficile infection (CDI).[1, 2] Infection with C. difficile affects nearly 500,000 people annually, resulting in higher healthcare expenditures, longer hospital stays, and nearly 15,000 deaths.[3] Data from the Centers for Disease Control and Prevention (CDC) suggest that a 30% reduction in the use of broad-spectrum antimicrobials, or a 5% reduction in the proportion of hospitalized patients receiving antimicrobials, could equate to a 26% reduction in CDI.[4] It is estimated that up to 50% of antimicrobial use in the hospital setting may be inappropriate.[5]
Since the Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America published guidelines for developing formal, hospital-based antimicrobial stewardship programs in 2007, stewardship practices have been adapted by frontline providers to fit day-to-day inpatient care.[5] A recent review by Hamilton et al. described several studies in which stewardship practices were embedded into daily workflows by way of checklists, education reminders, and periodic review of antimicrobial usage, as well as a multicenter pilot of point-of-care stewardship interventions successfully implemented by various providers, including nurses, pharmacists, and hospitalists.[6]
In response to the CDC's 2010 Get Smart for Healthcare campaign, which focused on stemming antimicrobial resistance and improving antimicrobial use, the Institute for Healthcare Improvement (IHI), in partnership with the CDC, brought together experts in the field to identify practical and feasible target practices for hospital-based stewardship and created a Driver Diagram to guide implementation efforts (Figure 1). Rohde et al. described the initial pilot testing of these practices, the decision to more actively engage frontline providers, and the 3 key strategies identified as high-yield improvement targets: enhancing the visibility of antimicrobial use at the point of care, creating easily accessible antimicrobial guidelines for common infections, and implementing a 72-hour timeout after initiation of antimicrobials.[7]
Figure 1
Shown is the Antibiotic Stewardship Driver Diagram that was developed as part of the Centers for Disease Control and Prevention (CDC) and Institute for Healthcare Improvement partnered efforts to stem antimicrobial overuse through the CDC's Get Smart for Healthcare campaign. Eight pilot hospitals were recruited to participate in field testing and to refine the diagram in a variety of settings from September 2011 through June 2012.
In this article, we describe how, in partnership with the IHI and the CDC, the hospital medicine programs at 5 diverse hospitals iteratively tested these 3 strategies with a goal of identifying the barriers and facilitators to effective hospitalist‐led antimicrobial stewardship.
METHODS
Representatives from 5 hospital medicine programs, IHI, and the CDC attended a kick‐off meeting at the CDC in November 2012 to discuss the 3 proposed strategies, examples of prior testing, and ideas for implementation. Each hospitalist provided a high‐level summary of the current state of stewardship efforts at their respective institutions, identified possible future states related to the improvement strategies, and anticipated problems in achieving them. The 3 key strategies are described below.
Improved Documentation/Visibility at Points of Care
Making antimicrobial indication, day of therapy, and anticipated duration transparent in the medical record was the targeted improvement strategy to avoid unnecessary antimicrobial days that can result from provider uncertainty, particularly during patient handoffs. Daily hospitalist documentation was identified as a vehicle through which these aspects of antimicrobial use could be effectively communicated and propagated from provider to provider.
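As an illustrative sketch only, the 3 documented elements (indication, day of therapy, anticipated duration) can be viewed as a small structured record; the record shape and field names here are hypothetical and are not the study's documentation template:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AntimicrobialNote:
    """Hypothetical structured fields for daily antimicrobial documentation."""
    drug: str
    indication: str             # why the antimicrobial was started
    start_date: date            # used to compute the current day of therapy
    planned_duration_days: int  # anticipated total duration

    def day_of_therapy(self, today: date) -> int:
        # By convention, the start date itself counts as day 1.
        return (today - self.start_date).days + 1

    def summary(self, today: date) -> str:
        # One line a daily note could carry forward across handoffs.
        return (f"{self.drug} for {self.indication}: day "
                f"{self.day_of_therapy(today)} of {self.planned_duration_days}")

note = AntimicrobialNote("ceftriaxone", "community-acquired pneumonia",
                         date(2015, 1, 5), 5)
print(note.summary(date(2015, 1, 7)))
# ceftriaxone for community-acquired pneumonia: day 3 of 5
```

Carrying such a summary line in each daily note is one way the indication, day of therapy, and anticipated duration could propagate from provider to provider.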
Stewardship educational sessions and/or awareness campaigns were hospitalist led and were accompanied by follow-up reminders in the form of emails, texts, flyers, or conferences. Infectious disease physicians were not directly involved in education but were available for consultation if needed.
Improved Guideline Clarity and Accessibility
Enhancing the availability of guidelines for frequently encountered infections and clarifying key guideline recommendations such as treatment duration were identified as the improvement strategies to help make treatment regimens more appropriate and consistent across providers.
Interventions included designing simplified pocket cards for commonly encountered infections (see Supporting Information, Appendix A, in the online version of this article), collaborating with infectious disease physicians on guideline development, disseminating guidelines through email, smartphone, and wall flyers, and creating a continuing medical education module focused on stewardship practices.
72‐Hour Antimicrobial Timeout
The 72-hour antimicrobial timeout required that hospitalists routinely reassess antimicrobial use 72 hours after antimicrobial initiation, a time when most pertinent culture data had returned. Hospitalists partnered with clinical pharmacists at all sites and addressed the following questions during each timeout: (1) Does the patient have a condition that requires continued use of antimicrobials? (2) Can the current antimicrobial regimen be tailored based on culture data? (3) What is the anticipated treatment duration? A variety of modifications occurred during timeouts, including broadening or narrowing the antimicrobial regimen based on culture data, switching to an oral antimicrobial, adjusting dose or frequency based on patient-specific factors, and discontinuing antimicrobials. Following the initial timeout, further adjustments were made as the clinical situation dictated; intermittent partnered timeouts continued during a patient's hospitalization on an individualized basis. Hospitalists were encouraged to independently review new diagnostic information daily and make changes as needed outside the dedicated timeout sessions. All decisions to adjust antimicrobial regimens were provider driven; no hospital employed automated antimicrobial discontinuation without provider input.
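The 3 timeout questions can be read as a simple decision checklist. The sketch below is a hypothetical illustration of that logic, not the study's instrument; the function name and prompt wording are invented for clarity:

```python
from typing import List, Optional

def timeout_review(still_indicated: bool, can_tailor: bool,
                   anticipated_duration_days: Optional[int]) -> List[str]:
    """Turn answers to the three 72-hour timeout questions into action prompts."""
    actions = []
    if not still_indicated:
        # Question 1: no condition requiring continued antimicrobials.
        actions.append("consider discontinuing antimicrobials")
    elif can_tailor:
        # Question 2: culture data allow tailoring (narrow, broaden, or go oral).
        actions.append("tailor regimen to culture data")
    if anticipated_duration_days is None:
        # Question 3: an anticipated duration should be stated explicitly.
        actions.append("document an anticipated treatment duration")
    return actions

print(timeout_review(True, True, None))
# ['tailor regimen to culture data', 'document an anticipated treatment duration']
```

In the study itself every such prompt was weighed by the hospitalist-pharmacist pair; nothing was automated.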
Implementation and Evaluation
Each site was tasked with conducting small tests of change aimed at implementing at least 1, and ideally all 3 strategies. Small, reasonably achievable interventions were preferred to large hospital‐wide initiatives so that key barriers and facilitators to the change could be quickly identified and addressed.
Methods of data collection varied across institutions and included anonymous physician surveys, face-to-face physician interviews, and medical record review. Evaluations of hospital-specific interventions utilized convenience samples to obtain real-time, actionable data. Postintervention data were shared during biweekly calls and compiled at the conclusion of the project. Barriers and facilitators of hospitalist-centered antimicrobial stewardship collected over the course of the project were reviewed and used to identify common themes.
RESULTS
Participating hospitals included 1 community nonteaching hospital, 2 community teaching hospitals, and 2 academic medical centers. All hospitals used computerized order entry and had prior quality improvement experience; 4 of 5 hospitals used electronic medical records. Postintervention data on antimicrobial documentation and timeouts were compiled and shared, and successes were identified. For example, 2 hospitals saw complete antimicrobial documentation increase from 4% and 8% to 51% and 65%, respectively, of medical records reviewed over a 3-month period. Additionally, cumulative timeout data across all hospitals showed that of 726 antimicrobial timeouts evaluated, the regimen was optimized or discontinued in 218 (30%).
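The cumulative timeout rate follows directly from the reported counts:

```python
# Counts reported in the Results: 218 regimens optimized or discontinued
# out of 726 antimicrobial timeouts evaluated across all 5 hospitals.
timeouts_evaluated = 726
regimens_changed = 218
rate = regimens_changed / timeouts_evaluated
print(f"{rate:.0%}")
# 30%
```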
Each site's key implementation barriers and facilitators were collected. Examples were compiled and common themes emerged (Table 1).
Table 1. Common Themes of Barriers and Facilitators to Antimicrobial Stewardship Within Each Hospitalist Program, With Accompanying Examples

NOTE: Barriers and facilitators were collected during biweekly conference calls as well as upon conclusion of our initiative.

Barriers: What impediments did we experience during our stewardship project?

Schedule and practice variability
- Physician variability in the structure of antimicrobial documentation
- Prescribing etiquette: it is difficult to change the course of a treatment plan started by a colleague
- Competing schedule demands of hospitalists and pharmacists

Skepticism of antimicrobial stewardship importance
- Perception that incorporating stewardship practices into daily work is time consuming
- Improvement project fatigue from competing quality improvement initiatives
- Unclear leadership buy-in

Focusing too broadly
- Choosing large initial interventions, which take significant time and effort to complete and quantify
- Setting unrealistic expectations (eg, expecting perfect adherence to documentation, guidelines, or timeouts)

Facilitators: What countermeasures did we target to overcome barriers?

Engage the hospitalists
- Establish a core part of the hospitalist group as stewardship champions
- Speak 1-on-1 to colleagues about specific goals and ways to achieve them
- Establish buy-in from leadership
- Encourage participation from a multidisciplinary team (eg, bedside nursing, clinical pharmacists)

Collect real-time data and feedback
- Utilize a data collection tool if possible; engage hospital coders to identify appropriate diagnoses
- Define your question and identify baseline data prior to intervention
- Give rapid-cycle feedback to colleagues that can impact antimicrobial prescribing in real time
- Recognize and reward high performers

Limit scope
- Start with small, quickly implementable interventions
- Identify interventions that are easy to integrate into hospitalist workflow
DISCUSSION
We successfully brought together hospitalists from diverse institutions to undertake small tests of change aimed at 3 key antimicrobial use improvement strategies. Following our interventions, significant improvement in antimicrobial documentation occurred at 2 institutions focusing on this improvement strategy, and 72‐hour timeouts performed across all hospitals tailored antimicrobial use in 30% of the sessions. Through frequent collaborative discussions and information sharing, we were able to identify common barriers and facilitators to hospitalist‐centered stewardship efforts.
Each participating hospital medicine program noticed a gradual shift in thinking among their colleagues, from initial skepticism about embedding stewardship within their daily workflow, to general acceptance that it was a worthwhile and meaningful endeavor. We posited that this transition in belief and behavior evolved for several reasons. First, each group was educated about its own prescribing practices from the outset rather than being presented with abstract data. This allowed for ownership of the problem and buy-in to improve it. Second, participants were able to experience the benefits at an individual level while the interventions were ongoing (eg, having other providers reciprocate structured documentation during patient handoffs, making antimicrobial plans clearer), reinforcing the achievability of stewardship practices within each group. Additionally, we focused on making small, manageable interventions that did not seem disruptive to hospitalists' daily workflow. For example, 1 group instituted antimicrobial timeouts during preexisting multidisciplinary rounds with clinical pharmacists. Last, project champions had both leadership and frontline roles within their groups and set the example for stewardship practices, which conveyed that this was a priority at the leadership level. These findings are in line with those of Charani et al., who evaluated behavior change strategies that influence antimicrobial prescribing in acute care. The authors found that behavioral determinants and social norms strongly influence prescribing practices in acute care, and that antimicrobial stewardship improvement projects should account for these influences.[8]
We also identified several barriers to antimicrobial stewardship implementation (Table 1) and proposed measures to address these barriers in future improvement efforts. For example, hospital medicine programs without a preexisting clinical pharmacy partnership asked hospitalist leadership for more direct clinical pharmacy involvement, recognizing the importance of a physician-pharmacy alliance for stewardship efforts. To more effectively embed antimicrobial stewardship into the daily routine, several hospitalists suggested standardized order sets for commonly encountered infections, as well as routine feedback on prescribing practices. Furthermore, although our simplified antimicrobial guideline pocket card enhanced access to this information, several colleagues suggested a smartphone application that would make access even easier and less cumbersome. Last, given concerns about the sustainability of antimicrobial stewardship initiatives, we recommended periodic reminders, random medical record review, and re-education on our 3 strategies and their purpose as necessary.
Our study is not without limitations. Each participating hospitalist group enacted hospital-specific interventions based on individual hospitalist program needs and goals, and although there was collective discussion, no group was tasked to undertake another group's initiative, thereby limiting generalizability. We did, however, identify common facilitators that could be adapted to a wide variety of hospitalist programs. We also note that our 3 main strategies were included in a recent review of quality indicators for measuring the success of antimicrobial stewardship programs; thus, although details of individual practice may vary, in principle these concepts can help identify areas for improvement within each unique stewardship program.[9] Importantly, we were unable to evaluate the impact of the 3 key improvement strategies on important clinical outcomes such as overall antimicrobial use, complications including CDI, and cost. However, others have found that improvement strategies similar to our 3 key processes are associated with meaningful improvements in clinical outcomes as well as reductions in healthcare costs.[10, 11] Last, long-term impact and sustainability were not evaluated. By choosing interventions that were viewed by frontline providers as valuable and attainable, however, we feel that each group will likely continue current practices beyond the initial evaluation timeframe.
Although these 5 hospitalist groups were able to successfully implement several aspects of the 3 key improvement strategies, we recognize that this is only the first step. Further effort is needed to quantify the impact of these improvement efforts on objective patient outcomes such as readmissions, length of stay, and antimicrobial‐related complications, which will better inform our local and national leaders on the inherent clinical and financial gains associated with hospitalist‐led stewardship work. Finally, creative ways to better integrate stewardship activities into existing provider workflows (eg, decision support and automation) will further accelerate improvement efforts.
In summary, hospitalists at 5 diverse institutions successfully implemented key antimicrobial improvement strategies and identified important implementation facilitators and barriers. Future efforts at hospitalist-led stewardship should focus on strategies to scale up interventions and evaluate their impact on clinical outcomes and cost.
Acknowledgements
The authors thank Latoya Kuhn, MPH, for her assistance with statistical analyses. We also thank the clinical pharmacists at each institution for their partnership in stewardship efforts: Patrick Arnold, PharmD, and Matthew Tupps, PharmD, MHA, from University of Michigan Hospital and Health System; and Roland Tam, PharmD, from Emory Johns Creek Hospital.
References
Maragakis LL, Perencevich EN, Cosgrove SE. Clinical and economic burden of antimicrobial resistance. Expert Rev Anti Infect Ther.2008;6(5):751–763.
Roberts RR, Hota B, Ahmad I, et al. Hospital and societal costs of antimicrobial‐resistant infections in a Chicago teaching hospital: implications for antibiotic stewardship. Clin Infect Dis.2009;49(8):1175–1184.
Lessa FC, Mu Y, Bamberg WM, et al. Burden of Clostridium difficile infection in the United States. N Engl J Med.2015;372(9):825–834.
Fridkin S, Baggs J, Fagan R, et al.; Centers for Disease Control and Prevention (CDC). Vital signs: improving antibiotic use among hospitalized patients. MMWR Morb Mortal Wkly Rep.2014;63(9):194–200.
Dellit TH, Owens RC, McGowan JE, et al.; Infectious Diseases Society of America; Society for Healthcare Epidemiology of America. Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America guidelines for developing an institutional program to enhance antimicrobial stewardship. Clin Infect Dis.2007;44(2):159–177.
Hamilton KW, Gerber JS, Moehring R, et al.; Centers for Disease Control and Prevention Epicenters Program. Point‐of‐prescription interventions to improve antimicrobial stewardship. Clin Infect Dis.2015;60(8):1252–1258.
Rohde JM, Jacobsen D, Rosenberg DJ. Role of the hospitalist in antimicrobial stewardship: a review of work completed and description of a multisite collaborative. Clin Ther.2013;35(6):751–757.
Charani E, Edwards R, Sevdalis N, et al. Behavior change strategies to influence antimicrobial prescribing in acute care: a systematic review. Clin Infect Dis.2011;53(7):651–662.
van den Bosch CM, Geerlings SE, Natsch S, Prins JM, Hulscher ME. Quality indicators to measure appropriate antibiotic use in hospitalized adults. Clin Infect Dis.2015;60(2):281–291.
Bosso JA, Drew RH. Application of antimicrobial stewardship to optimise management of community acquired pneumonia. Int J Clin Pract.2011;65(7):775–783.
Davey P, Brown E, Charani E, et al. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database Syst Rev.2013;4:CD003543.
Address for correspondence and reprint requests: Megan R. Mack, MD, Clinical Instructor, Hospitalist Program, 3119 Taubman Center, 1500 E. Medical Center Drive, SPC 5376, Ann Arbor, MI 48109; Telephone: 734‐647‐0332; Fax: 734-232-9343; E‐mail: [email protected]
Hospital medicine has grown tremendously since its inception in the 1990s.[1, 2] This expansion has led to the firm establishment of hospitalists in medical education, quality improvement (QI), research, subspecialty comanagement, and administration.[3, 4, 5]
This growth has also created new challenges. The training needs for the next generation of hospitalists are changing given the expanded clinical duties expected of hospitalists.[6, 7, 8] Prior surveys have suggested that some graduates employed as hospitalists have reported feeling underprepared in the areas of surgical comanagement, neurology, geriatrics, palliative care, and navigating the interdisciplinary care system.[9, 10]
In keeping with national trends, the number of residents interested in hospital medicine at our institution has dramatically increased. As internal medicine residents interested in careers in hospitalist medicine, we felt that improving hospitalist training at our institution was imperative given the increasing scope of practice and job competitiveness.[11, 12] We therefore sought to design and implement a hospitalist curriculum within our residency. In this article, we describe the genesis of our program, our final product, and the challenges of creating a curriculum while being internal medicine residents.
METHODS
Needs Assessment
To improve hospitalist training at our institution, we first performed a needs assessment. We contacted recent hospitalist graduates and current faculty to identify aspects of their clinical duties that may have been underemphasized during their training. Next, we performed a literature search in PubMed using the combined terms hospitalist, hospital medicine, residency, education, training gaps, and curriculum. Based on these efforts, we developed a survey that assessed residents' attitudes toward various components of a potential curriculum. The survey was sent to all categorical internal medicine residents at our institution in December 2014 and asked that only residents interested in careers in hospital medicine respond. Responses were measured on a 5‐point Likert scale (1 = least important to 5 = most important).
Curriculum Development
Our intention was to develop a well‐rounded program that utilized mentorship, research, and clinical experience to augment our learner's knowledge and skills for a successful, long‐term career in the increasingly competitive field of hospital medicine. When designing our curriculum, we accounted for our program's current rotational requirements and local culture. Several previously identified underemphasized areas within hospital medicine, such as palliative care and neurology, were already required rotations at our program.[3, 4, 5] Therefore, any proposed curricular changes would need to mold into program requirements while still providing a preparatory experience in hospital medicine beyond what our current rotations offered. We felt this could be accomplished by including rotations that could provide specific skills pertinent to hospital medicine, such as ultrasound diagnostics or QI.
Table. Key Differences in Curriculum Requirements Between Our Internal Medicine Residency Program and the Hospitalist Curriculum

Rotation                  Non‐SHAPE            SHAPE
ICU                       At least 12 weeks    At least 16 weeks
Medical wards             At least 16 weeks    At least 16 weeks
Ultrasound diagnostics    Elective             Required
Quality improvement       Elective             Required
Surgical comanagement     Elective             Required
Medicine consult          Elective             Required
Neurology                 Required             Required
Palliative care           Required             Required

NOTE: Abbreviations: ICU, intensive care unit; SHAPE, Stanford Hospitalist Advanced Practice and Education.
Meeting With Stakeholders
We presented our curriculum proposal to the chief of the Stanford Hospital Medicine Program, whom we had identified early in the process as our primary mentor and who proved to be an instrumental advocate. After several meetings with the hospitalist group to further develop our program, we presented it to the residency program leadership, who helped us finalize it.
RESULTS
Needs Assessment
Twenty‐two out of 111 categorical residents in our program (19.8%) identified themselves as interested in hospital medicine and responded to the survey. There were several areas of a potential hospitalist curriculum that the residents identified as important (defined as 4 or 5 on a 5‐point Likert scale). These areas included mentorship (90.9% of residents; mean 4.6, standard deviation [SD] 0.7), opportunities to teach (86.3%; mean 4.4, SD 0.9), and the establishment of a formal hospitalist curriculum (85.7%; mean 4.2, SD 0.8). The residents also identified several rotations that would be beneficial (defined as a 4 or 5 on a 5‐point Likert scale). These included medicine consult/procedures team (95.5% of residents; mean 4.7, SD 0.6), point‐of‐care ultrasound diagnostics (90.8%; mean 4.7, SD 0.8), and a community hospitalist preceptorship (86.4%; mean 4.4, SD 1.0). The residents also identified several rotations deemed to be of lesser benefit. These rotations included inpatient neurology (only 27.3% of residents; mean 3.2, SD 0.8) and palliative care (50.0%; mean 3.5, SD 1.0).
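The item summaries reported above (percentage of residents rating an item 4 or 5, mean, and SD on the 5‐point scale) follow directly from raw Likert responses. As a minimal sketch of that computation, using hypothetical responses rather than study data:

```python
from statistics import mean, stdev

def summarize_likert(responses):
    """Summarize one survey item rated on a 1-5 Likert scale.

    Returns (% rating the item 4 or 5, mean, sample SD), each rounded
    to one decimal place, mirroring how the results are reported.
    """
    pct_important = 100 * sum(1 for r in responses if r >= 4) / len(responses)
    return (round(pct_important, 1),
            round(mean(responses), 1),
            round(stdev(responses), 1))

# Hypothetical responses for a single item (not the study's raw data)
item_responses = [5, 4, 4, 3]
print(summarize_likert(item_responses))
```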
The Final Product: A Hospitalist Training Curriculum
Based on the needs assessment and meetings with program leadership, we designed a hospitalist program and named it the Stanford Hospitalist Advanced Practice and Education (SHAPE) program. The program was based on 3 core principles: (1) clinical excellence: by training in hospitalist‐relevant clinical areas, (2) academic development: with required research, QI, and teaching, and (3) career mentorship.
Clinical Excellence By Training in Hospitalist‐Relevant Clinical Areas
The SHAPE curriculum builds on our institution's existing curriculum with additional required rotations to broaden residents' skill sets: ultrasound diagnostics, surgical comanagement, and QI (Box 1). Given that some hospitalists work in an open intensive care unit (ICU), we increased the amount of required ICU time to provide expanded procedural and critical care experience. The residents also receive 10 seminars focused on hospital medicine, including patient safety, QI, and career development (Box 1).
Box 1. The Stanford Hospitalist Advanced Practice and Education (SHAPE) program curriculum. Members of the program are required to complete the requirements listed before the end of their third year. Note that the clinical rotations are spread over the 3 years of residency.
Stanford Hospitalist Advanced Practice and Education Required Clinical Rotations
Medicine Consult (24 weeks)
Critical Care (16 weeks)
Ultrasound Diagnostics (2 weeks)
Quality Improvement (4 weeks)
Inpatient Neurology (2 weeks)
Palliative Care (2 weeks)
Surgical Comanagement (2 weeks)
Required Nonclinical Work
A quality improvement, clinical, or educational project with a presentation at an academic conference or manuscript submission to a peer‐reviewed journal
Enrollment in the Stanford Faculty Development Center workshop on effective clinical teaching
Attendance at the hospitalist lecture series (10 lectures): patient safety, hospital efficiency, fundamentals of perioperative medicine, healthcare structure and changing reimbursement patterns, patient handoff, career development, prevention of burnout, inpatient nutrition, hospitalist research, and lean modeling in the hospital setting
Mentorship
Each participant is matched with 3 hospitalist mentors in order to provide comprehensive career and personal mentorship
Academic Development With Required Research and Teaching
SHAPE program residents are required to develop a QI, education, or clinical research project before graduation. They are required to present their work at a hospitalist conference or submit to a peer‐reviewed journal. They are also encouraged to attend the Society of Hospital Medicine annual meeting for their own career development.
SHAPE program residents also have increased opportunities to improve their teaching skills. The residents are enrolled in a clinical teaching workshop. Furthermore, the residents are responsible for leading regular lectures regarding common inpatient conditions for first‐ and second‐year medical students enrolled in a transitions‐of‐care elective.
Career Mentorship
Each resident is paired with 3 faculty hospitalists who have different areas of expertise (eg, clinical teaching, surgical comanagement, QI). Residents meet individually with their mentors on a quarterly basis to discuss career development and research projects. The SHAPE program will also host an annual résumé‐development and career workshop.
SHAPE Resident Characteristics
In its first year, 13 of 25 residents (52%) interested in hospital medicine enrolled in the program. The SHAPE residents were predominantly second‐year residents (11 residents, 84.6%).
Among the 12 residents who did not enroll, 7 (58.3%) were graduating seniors who were not eligible.
DISCUSSION
The training needs of aspiring hospitalists are changing as the scope of hospital medicine has expanded.[6] Residency programs can respond by implementing a hospitalist curriculum that augments training and provides focused mentorship.[13, 14] An emphasis on resident leadership within these programs helps ensure housestaff buy‐in and satisfaction.
We learned several key lessons while designing our curriculum, given our unique dual role as residents and curriculum founders. These included the early engagement of departmental leadership as mentors, who assisted us in integrating our program within the existing internal medicine residency and in selecting electives. It was also imperative to secure adequate buy‐in from the academic hospitalists at our institution, as they would be our primary source of faculty mentors and lecturers.
A second challenge was balancing curriculum requirements and ensuring adequate buy‐in from our residents. The residents had fewer electives over their second and third years. This was balanced, however, by the residents receiving first preference on historically desirable rotations at our institution (including ultrasound, medicine consult, and QI). Furthermore, we purposefully included current resident opinions when performing our needs assessment to ensure adequate buy‐in. Surprisingly, the residents rated several key rotations, such as palliative care and inpatient neurology, as being of low importance in our needs assessment. Although this may seem counterintuitive, several of these rotations (ie, neurology and palliative care) are already required of all residents at our program. It may be that some residents feel comfortable in these areas based on their previous experiences. Alternatively, this result may represent a lack of knowledge on the residents' part of what skill sets are imperative for career hospitalists.[4, 6]
Finally, we recognize that our program was based on our local needs assessment. Other residency programs may already have similar curricula built into their rotation schedule. In those instances, a hospitalist curriculum that emphasizes scholarly advancement and mentorship may be more appropriate.
CONCLUSIONS AND FUTURE DIRECTIONS
At our institution, we have created a hospitalist program designed to train the next generation of hospitalists with improved clinical, research, and teaching skills. Our cohort of residents will be observed over the next year, and we will administer a follow‐up study to assess the effectiveness of the program.
Acknowledgements
The authors acknowledge Karina Delgado, program manager at Stanford's internal medicine residency, for providing data on recent graduate plans.
Disclosures: Andre Kumar, MD, and Andrea Smeraglio, MD, are cofirst authors. The authors report no conflicts of interest.
Wachter RM. The hospitalist field turns 15: new opportunities and challenges. J Hosp Med.2011;6(4):10–13.
Glasheen JJ, Epstein KR, Siegal E, Kutner JS, Prochazka AV. The spectrum of community based hospitalist practice: A call to tailor internal medicine residency training. Arch Intern Med.2007;167:727–729.
Pham HH, Devers KJ, Kuo S, Berenson R. Health care market trends and the evolution of hospitalist use and roles. J Gen Intern Med.2005;20(2):101–107.
Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Survey of the National Association of Inpatient Physicians. Ann Intern Med.1999:343–349.
Goldenberg J, Glasheen JJ. Hospitalist educators: future of inpatient internal medicine training. Mt Sinai J Med.2008;75(5):430–435.
Glasheen JJ, Siegal EM, Epstein K, Kutner J, Prochazka AV. Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists' needs. J Gen Intern Med.2008;23(7):1110–1115.
Arora V, Guardiano S, Donaldson D, Storch I, Hemstreet P. Closing the gap between internal medicine training and practice: recommendations from recent graduates. Am J Med.2005;118(6):680–685.
Chaudhry SI, Lien C, Ehrlich J, et al. Curricular content of internal medicine residency programs: a nationwide report. Am J Med.2014;127(12):1247–1254.
Plauth WH, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med.2001;111(3):247–254.
Holmboe ES, Bowen JL, Green M, et al. Reforming internal medicine residency training: a report from the Society of General Internal Medicine's Task Force for Residency Reform. J Gen Intern Med.2005;20(12):1165–1172.
Goodman PH, Januska A. Clinical hospital medicine fellowships: perspectives of employers, hospitalists, and medicine residents. J Hosp Med.2008;3(1):28–34.
Flanders SA, Centor B, Weber V, McGinn T, DeSalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the Academic hospital medicine Summit. J Hosp Med.2009;4(4):240–246.
Glasheen JJ, Goldenberg J, Nelson JR. Achieving hospital medicine's promise through internal medicine residency redesign. Mt Sinai J Med.2008;75(5):436–441.
Hauer KE, Flanders SA, Wachter RM. Training future hospitalists. West J Med.1999;171(12):367–370.
Hospital medicine has grown tremendously since its inception in the 1990s.[1, 2] This expansion has led to the firm establishment of hospitalists in medical education, quality improvement (QI), research, subspecialty comanagement, and administration.[3, 4, 5]
This growth has also created new challenges. The training needs for the next generation of hospitalists are changing given the expanded clinical duties expected of hospitalists.[6, 7, 8] Prior surveys have suggested that some graduates employed as hospitalists have reported feeling underprepared in the areas of surgical comanagement, neurology, geriatrics, palliative care, and navigating the interdisciplinary care system.[9, 10]
In keeping with national trends, the number of residents interested in hospital medicine at our institution has dramatically increased. As internal medicine residents interested in careers in hospitalist medicine, we felt that improving hospitalist training at our institution was imperative given the increasing scope of practice and job competitiveness.[11, 12] We therefore sought to design and implement a hospitalist curriculum within our residency. In this article, we describe the genesis of our program, our final product, and the challenges of creating a curriculum while being internal medicine residents.
METHODS
Needs Assessment
To improve hospitalist training at our institution, we first performed a needs assessment. We contacted recent hospitalist graduates and current faculty to identify aspects of their clinical duties that may have been underemphasized during their training. Next, we performed a literature search in PubMed using the combined terms of hospitalist, hospital medicine, residency, education, training gaps, or curriculum. Based on these efforts, we developed a resident survey that assessed their attitudes toward various components of a potential curriculum. The survey was sent to all categorical internal medicine residents at our institution in December 2014. The survey specified that the respondents only include those who were interested in careers in hospital medicine. Responses were measured using a 5‐point Likert scale (1 = least important to 5 = most important).
Curriculum Development
Our intention was to develop a well‐rounded program that utilized mentorship, research, and clinical experience to augment our learner's knowledge and skills for a successful, long‐term career in the increasingly competitive field of hospital medicine. When designing our curriculum, we accounted for our program's current rotational requirements and local culture. Several previously identified underemphasized areas within hospital medicine, such as palliative care and neurology, were already required rotations at our program.[3, 4, 5] Therefore, any proposed curricular changes would need to mold into program requirements while still providing a preparatory experience in hospital medicine beyond what our current rotations offered. We felt this could be accomplished by including rotations that could provide specific skills pertinent to hospital medicine, such as ultrasound diagnostics or QI.
Key Differences in Curriculum Requirements Between Our Internal Medicine Residency Program and the Hospitalist Curriculum
Rotation
Non‐SHAPE
SHAPE
NOTE: Abbreviations: ICU, intensive care unit; SHAPE, Stanford Hospitalist Advanced Practice and Education.
ICU
At least 12 weeks
At least 16 weeks
Medical wards
At least 16 weeks
At least 16 weeks
Ultrasound diagnostics
Elective
Required
Quality improvement
Elective
Required
Surgical comanagement
Elective
Required
Medicine consult
Elective
Required
Neurology
Required
Required
Palliative care
Required
Required
Meeting With Stakeholders
We presented our curriculum proposal to the chief of the Stanford Hospital Medicine Program. We identified her early in the process to be our primary mentor, and she proved instrumental in being an advocate. After several meetings with the hospitalist group to further develop our program, we presented it to the residency program leadership who helped us to finalize our program.
RESULTS
Needs Assessment
Twenty‐two out of 111 categorical residents in our program (19.8%) identified themselves as interested in hospital medicine and responded to the survey. There were several areas of a potential hospitalist curriculum that the residents identified as important (defined as 4 or 5 on a 5‐point Likert scale). These areas included mentorship (90.9% of residents; mean 4.6, standard deviation [SD] 0.7), opportunities to teach (86.3%; mean 4.4, SD 0.9), and the establishment of a formal hospitalist curriculum (85.7%; mean 4.2, SD 0.8). The residents also identified several rotations that would be beneficial (defined as a 4 or 5 on a 5‐point Likert scale). These included medicine consult/procedures team (95.5% of residents; mean 4.7, SD 0.6), point‐of‐care ultrasound diagnostics (90.8%; mean 4.7, SD 0.8), and a community hospitalist preceptorship (86.4%; mean 4.4, SD 1.0). The residents also identified several rotations deemed to be of lesser benefit. These rotations included inpatient neurology (only 27.3% of residents; mean 3.2, SD 0.8) and palliative care (50.0%; mean 3.5, SD 1.0).
The Final Product: A Hospitalist Training Curriculum
Based on the needs assessment and meetings with program leadership, we designed a hospitalist program and named it the Stanford Hospitalist Advanced Practice and Education (SHAPE) program. The program was based on 3 core principles: (1) clinical excellence: by training in hospitalist‐relevant clinical areas, (2) academic development: with required research, QI, and teaching, and (3) career mentorship.
Clinical Excellence By Training in Hospitalist‐Relevant Clinical Areas
The SHAPE curriculum builds off of our institution's current curriculum with additional required rotations to improve the resident's skillsets. These included ultrasound diagnostics, surgical comanagement, and QI (Box 1). Given that some hospitalists work in an open intensive care unit (ICU), we increased the amount of required ICU time to provide expanded procedural and critical care experiences. The residents also receive 10 seminars focused on hospital medicine, including patient safety, QI, and career development (Box 1).
Box
The Stanford Hospitalist Advanced Practice and Education (SHAPE) program curriculum. Members of the program are required to complete the requirements listed before the end of their third year. Note that the clinical rotations are spread over the 3 years of residency.
Stanford Hospitalist Advanced Practice and Education Required Clinical Rotations
Medicine Consult (24 weeks)
Critical Care (16 weeks)
Ultrasound Diagnostics (2 weeks)
Quality Improvement (4 weeks)
Inpatient Neurology (2 weeks)
Palliative Care (2 weeks)
Surgical Comanagement (2 weeks)
Required Nonclinical Work
Quality improvement, clinical or educational project with a presentation at an academic conference or manuscript submission in a peer‐reviewed journal
Enrollment in the Stanford Faculty Development Center workshop on effective clinical teaching
Attendance at the hospitalist lecture series (10 lectures): patient safety, hospital efficiency, fundamentals of perioperative medicine, healthcare structure and changing reimbursement patterns, patient handoff, career development, prevention of burnout, inpatient nutrition, hospitalist research, and lean modeling in the hospital setting
Mentorship
Each participant is matched with 3 hospitalist mentors in order to provide comprehensive career and personal mentorship
Academic Development With Required Research and Teaching
SHAPE program residents are required to develop a QI, education, or clinical research project before graduation. They are required to present their work at a hospitalist conference or submit to a peer‐reviewed journal. They are also encouraged to attend the Society of Hospital Medicine annual meeting for their own career development.
SHAPE program residents also have increased opportunities to improve their teaching skills. The residents are enrolled in a clinical teaching workshop. Furthermore, the residents are responsible for leading regular lectures regarding common inpatient conditions for first‐ and second‐year medical students enrolled in a transitions‐of‐care elective.
Career Mentorship
Each resident is paired with 3 faculty hospitalists who have different areas of expertise (ie, clinical teaching, surgical comanagement, QI). They individually meet on a quarterly basis to discuss their career development and research projects. The SHAPE program will also host an annual resume‐development and career workshop.
SHAPE Resident Characteristics
In its first year, 13 of 25 residents (52%) interested in hospital medicine enrolled in the program. The SHAPE residents were predominantly second‐year residents (11 residents, 84.6%).
Hospital medicine has grown tremendously since its inception in the 1990s.[1, 2] This expansion has led to the firm establishment of hospitalists in medical education, quality improvement (QI), research, subspecialty comanagement, and administration.[3, 4, 5]
This growth has also created new challenges. The training needs of the next generation of hospitalists are changing given the expanded clinical duties expected of them.[6, 7, 8] Prior surveys suggest that some graduates employed as hospitalists feel underprepared in surgical comanagement, neurology, geriatrics, palliative care, and navigating the interdisciplinary care system.[9, 10]
In keeping with national trends, the number of residents interested in hospital medicine at our institution has dramatically increased. As internal medicine residents interested in careers in hospital medicine, we felt that improving hospitalist training at our institution was imperative given the increasing scope of practice and job competitiveness.[11, 12] We therefore sought to design and implement a hospitalist curriculum within our residency. In this article, we describe the genesis of our program, our final product, and the challenges of creating a curriculum while being internal medicine residents.
METHODS
Needs Assessment
To improve hospitalist training at our institution, we first performed a needs assessment. We contacted recent hospitalist graduates and current faculty to identify aspects of their clinical duties that may have been underemphasized during their training. Next, we performed a literature search in PubMed using the combined terms of hospitalist, hospital medicine, residency, education, training gaps, or curriculum. Based on these efforts, we developed a survey that assessed resident attitudes toward various components of a potential curriculum. The survey was sent to all categorical internal medicine residents at our institution in December 2014, with instructions that only residents interested in a career in hospital medicine respond. Responses were measured using a 5‐point Likert scale (1 = least important to 5 = most important).
Curriculum Development
Our intention was to develop a well‐rounded program that utilized mentorship, research, and clinical experience to augment our learners' knowledge and skills for a successful, long‐term career in the increasingly competitive field of hospital medicine. When designing our curriculum, we accounted for our program's current rotational requirements and local culture. Several previously identified underemphasized areas within hospital medicine, such as palliative care and neurology, were already required rotations at our program.[3, 4, 5] Therefore, any proposed curricular changes would need to fit within program requirements while still providing a preparatory experience in hospital medicine beyond what our current rotations offered. We felt this could be accomplished by including rotations that provide specific skills pertinent to hospital medicine, such as ultrasound diagnostics or QI.
Key Differences in Curriculum Requirements Between Our Internal Medicine Residency Program and the Hospitalist Curriculum

Rotation                  Non‐SHAPE           SHAPE
ICU                       At least 12 weeks   At least 16 weeks
Medical wards             At least 16 weeks   At least 16 weeks
Ultrasound diagnostics    Elective            Required
Quality improvement       Elective            Required
Surgical comanagement     Elective            Required
Medicine consult          Elective            Required
Neurology                 Required            Required
Palliative care           Required            Required

NOTE: Abbreviations: ICU, intensive care unit; SHAPE, Stanford Hospitalist Advanced Practice and Education.
Meeting With Stakeholders
We presented our curriculum proposal to the chief of the Stanford Hospital Medicine Program. We identified her early in the process as our primary mentor, and she proved an instrumental advocate. After several meetings with the hospitalist group to further develop our program, we presented it to the residency program leadership, who helped us finalize the program.
RESULTS
Needs Assessment
Twenty‐two out of 111 categorical residents in our program (19.8%) identified themselves as interested in hospital medicine and responded to the survey. There were several areas of a potential hospitalist curriculum that the residents identified as important (defined as 4 or 5 on a 5‐point Likert scale). These areas included mentorship (90.9% of residents; mean 4.6, standard deviation [SD] 0.7), opportunities to teach (86.3%; mean 4.4, SD 0.9), and the establishment of a formal hospitalist curriculum (85.7%; mean 4.2, SD 0.8). The residents also identified several rotations that would be beneficial (defined as a 4 or 5 on a 5‐point Likert scale). These included medicine consult/procedures team (95.5% of residents; mean 4.7, SD 0.6), point‐of‐care ultrasound diagnostics (90.8%; mean 4.7, SD 0.8), and a community hospitalist preceptorship (86.4%; mean 4.4, SD 1.0). The residents also identified several rotations deemed to be of lesser benefit. These rotations included inpatient neurology (only 27.3% of residents; mean 3.2, SD 0.8) and palliative care (50.0%; mean 3.5, SD 1.0).
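The summary statistics above follow a consistent pattern: for each survey item, the share of residents rating it 4 or 5 on the 5‐point Likert scale, plus the mean and standard deviation of the ratings. The sketch below (not the authors' analysis code; the responses shown are invented for illustration) shows how such summaries can be computed.

```python
# Hypothetical sketch of the Likert-item summary used in the Results:
# percent of respondents rating an item 4 or 5 ("important"), plus the
# mean and (population) standard deviation of the ratings.
# The example responses are made up; they are not the study data.
from statistics import mean, pstdev

def summarize(ratings):
    """Return (% rating >= 4, mean, SD) for a list of 1-5 Likert scores."""
    pct_important = 100.0 * sum(r >= 4 for r in ratings) / len(ratings)
    return (round(pct_important, 1), round(mean(ratings), 1), round(pstdev(ratings), 1))

# Invented ratings from 22 hypothetical respondents for one survey item.
mentorship_item = [5, 5, 4, 5, 4, 5, 5, 4, 4, 5, 5,
                   4, 4, 5, 3, 5, 4, 5, 4, 4, 5, 3]
print(summarize(mentorship_item))  # → (90.9, 4.4, 0.7)
```

With 20 of the 22 invented ratings at 4 or above, the item would be reported as important by 90.9% of respondents, mirroring the format of the figures in the paragraph above.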
The Final Product: A Hospitalist Training Curriculum
Based on the needs assessment and meetings with program leadership, we designed a hospitalist program and named it the Stanford Hospitalist Advanced Practice and Education (SHAPE) program. The program was based on 3 core principles: (1) clinical excellence: by training in hospitalist‐relevant clinical areas, (2) academic development: with required research, QI, and teaching, and (3) career mentorship.
Clinical Excellence By Training in Hospitalist‐Relevant Clinical Areas
The SHAPE curriculum builds on our institution's current curriculum with additional required rotations to improve residents' skill sets. These include ultrasound diagnostics, surgical comanagement, and QI (Box 1). Given that some hospitalists work in an open intensive care unit (ICU), we increased the amount of required ICU time to provide expanded procedural and critical care experiences. The residents also receive 10 seminars focused on hospital medicine, including patient safety, QI, and career development (Box 1).
Box 1. The Stanford Hospitalist Advanced Practice and Education (SHAPE) program curriculum. Members of the program are required to complete the requirements listed before the end of their third year. Note that the clinical rotations are spread over the 3 years of residency.
Stanford Hospitalist Advanced Practice and Education Required Clinical Rotations
Medicine Consult (24 weeks)
Critical Care (16 weeks)
Ultrasound Diagnostics (2 weeks)
Quality Improvement (4 weeks)
Inpatient Neurology (2 weeks)
Palliative Care (2 weeks)
Surgical Comanagement (2 weeks)
Required Nonclinical Work
Quality improvement, clinical or educational project with a presentation at an academic conference or manuscript submission in a peer‐reviewed journal
Enrollment in the Stanford Faculty Development Center workshop on effective clinical teaching
Attendance at the hospitalist lecture series (10 lectures): patient safety, hospital efficiency, fundamentals of perioperative medicine, healthcare structure and changing reimbursement patterns, patient handoff, career development, prevention of burnout, inpatient nutrition, hospitalist research, and lean modeling in the hospital setting
Mentorship
Each participant is matched with 3 hospitalist mentors in order to provide comprehensive career and personal mentorship
Academic Development With Required Research and Teaching
SHAPE program residents are required to develop a QI, education, or clinical research project before graduation. They are required to present their work at a hospitalist conference or submit to a peer‐reviewed journal. They are also encouraged to attend the Society of Hospital Medicine annual meeting for their own career development.
SHAPE program residents also have increased opportunities to improve their teaching skills. The residents are enrolled in a clinical teaching workshop. Furthermore, the residents are responsible for leading regular lectures regarding common inpatient conditions for first‐ and second‐year medical students enrolled in a transitions‐of‐care elective.
Career Mentorship
Each resident is paired with 3 faculty hospitalists who have different areas of expertise (ie, clinical teaching, surgical comanagement, QI). Each pair meets individually on a quarterly basis to discuss the resident's career development and research projects. The SHAPE program will also host an annual resume‐development and career workshop.
SHAPE Resident Characteristics
In its first year, 13 of 25 residents (52%) interested in hospital medicine enrolled in the program. The SHAPE residents were predominantly second‐year residents (11 residents, 84.6%).
Among the 12 residents who did not enroll, there were 7 seniors (58.3%) who would soon be graduating and would not be eligible.
DISCUSSION
The training needs of aspiring hospitalists are changing as the scope of hospital medicine has expanded.[6] Residency programs can address these evolving needs by implementing a hospitalist curriculum that augments training and provides focused mentorship.[13, 14] An emphasis on resident leadership within these programs helps ensure housestaff buy‐in and satisfaction.
Our dual role as residents and curriculum founders taught us several key lessons while designing the curriculum. The first was the value of engaging departmental leadership early as mentors; they helped us integrate our program within the existing internal medicine residency and select electives. It was also imperative to secure adequate buy‐in from the academic hospitalists at our institution, as they would be our primary source of faculty mentors and lecturers.
A second challenge was balancing curriculum requirements and ensuring adequate buy‐in from our residents. The residents had fewer electives over their second and third years. However, this was balanced by the fact that the residents were given first preference on historically desirable rotations at our institution (including ultrasound, medicine consult, and QI). Furthermore, we purposefully included current resident opinions when performing our needs assessment to ensure adequate buy‐in. Surprisingly, the residents rated several key rotations, such as palliative care and inpatient neurology, as being of low importance in our needs assessment. Although this may seem contradictory, several of these rotations (ie, neurology and palliative care) are already required of all residents at our program, and some residents may feel comfortable in these areas based on their previous experiences. Alternatively, this result may reflect a lack of awareness on the residents' part of which skill sets are imperative for career hospitalists.[4, 6]
Finally, we recognize that our program was based on our local needs assessment. Other residency programs may already have similar curricula built into their rotation schedule. In those instances, a hospitalist curriculum that emphasizes scholarly advancement and mentorship may be more appropriate.
CONCLUSIONS AND FUTURE DIRECTIONS
At our institution, we have created a hospitalist program designed to train the next generation of hospitalists with improved clinical, research, and teaching skills. Our cohort of residents will be observed over the next year, and we will administer a follow‐up study to assess the effectiveness of the program.
Acknowledgements
The authors acknowledge Karina Delgado, program manager at Stanford's internal medicine residency, for providing data on recent graduate plans.
Disclosures: Andre Kumar, MD, and Andrea Smeraglio, MD, are cofirst authors. The authors report no conflicts of interest.
References
Wachter RM. The hospitalist field turns 15: new opportunities and challenges. J Hosp Med. 2011;6(4):10–13.
Glasheen JJ, Epstein KR, Siegal E, Kutner JS, Prochazka AV. The spectrum of community based hospitalist practice: a call to tailor internal medicine residency training. Arch Intern Med. 2007;167:727–729.
Pham HH, Devers KJ, Kuo S, Berenson R. Health care market trends and the evolution of hospitalist use and roles. J Gen Intern Med. 2005;20(2):101–107.
Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Survey of the National Association of Inpatient Physicians. Ann Intern Med. 1999:343–349.
Goldenberg J, Glasheen JJ. Hospitalist educators: future of inpatient internal medicine training. Mt Sinai J Med. 2008;75(5):430–435.
Glasheen JJ, Siegal EM, Epstein K, Kutner J, Prochazka AV. Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists' needs. J Gen Intern Med. 2008;23(7):1110–1115.
Arora V, Guardiano S, Donaldson D, Storch I, Hemstreet P. Closing the gap between internal medicine training and practice: recommendations from recent graduates. Am J Med. 2005;118(6):680–685.
Chaudhry SI, Lien C, Ehrlich J, et al. Curricular content of internal medicine residency programs: a nationwide report. Am J Med. 2014;127(12):1247–1254.
Plauth WH, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247–254.
Holmboe ES, Bowen JL, Green M, et al. Reforming internal medicine residency training: a report from the Society of General Internal Medicine's Task Force for Residency Reform. J Gen Intern Med. 2005;20(12):1165–1172.
Goodman PH, Januska A. Clinical hospital medicine fellowships: perspectives of employers, hospitalists, and medicine residents. J Hosp Med. 2008;3(1):28–34.
Flanders SA, Centor B, Weber V, McGinn T, DeSalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Hosp Med. 2009;4(4):240–246.
Glasheen JJ, Goldenberg J, Nelson JR. Achieving hospital medicine's promise through internal medicine residency redesign. Mt Sinai J Med. 2008;75(5):436–441.
Hauer KE, Flanders SA, Wachter RM. Training future hospitalists. West J Med. 1999;171:367–370.
Address for correspondence and reprint requests: Andre Kumar, MD, Department of Medicine, Stanford University Hospital, 300 Pasteur Drive, Lane 154, Stanford, CA 94305‐5133; Telephone: 650‐723‐6661; Fax: 650‐498‐6205; E‐mail: [email protected]