Recruiting hospitalized patients for research: How do participants differ from eligible nonparticipants?

Randomized controlled trials (RCTs) generally provide the most rigorous evidence for clinical practice guidelines and quality‐improvement initiatives. However, 2 major shortcomings limit the ability to broadly apply these results to the general population. One has to do with sampling bias (due to subject consent and inclusion/exclusion criteria) and the other with potential differences between participants and eligible nonparticipants. The latter may be of particular importance in trials of behavioral interventions (rather than medication trials), which often require substantial participant effort.

First, individuals who provide written consent to participate in RCTs of behavioral interventions typically represent a minority of those approached and therefore may not be representative of the target population. Although the consenting proportion is often not disclosed, some estimate that only 35% to 50% of eligible subjects typically participate.[1, 2, 3] These estimates mirror the authors' prior experience with a 55.2% consent rate among subjects approached for a Medicare quality‐improvement behavioral intervention.[3] Though the literature is sparse, it suggests that eligible individuals who decline to participate in either interventions or usual care may differ from participants in their perception of intervention risks and effort[4] or in their levels of self‐efficacy or confidence in recovery.[5, 6] Relatively low enrollment rates mean that much of the population remains unstudied; however, evidence‐based interventions are often applied to populations broader than those included in the original analyses.

Additionally, although some nonparticipants may correctly decide that they do not need the assistance of a proposed intervention and therefore decline to participate, others may misjudge the intervention's potential benefit and applicability when declining. In other words, electing not to participate in a study, despite eligibility, may reflect more than inconvenience, disinterest, or a lack of desire to contribute to knowledge; for some individuals it may offer a proxy statement about health knowledge, personal beliefs, attitudes, and needs, including perceived stress,[5] cultural relevance,[7, 8] and literacy/health literacy.[9, 10] Characterizing these patients can help us to modify recruitment approaches and improve participation so that participants better represent the target population. If these differences also relate to patients' adherence to care recommendations, a more nuanced understanding could improve ways to identify and engage potentially nonadherent patients to improve health outcomes.

We hypothesized that we could identify characteristics that differ between behavioral‐intervention participants and eligible nonparticipants using a set of screening questions. We proposed that these characteristics, including constructs related to perceived stress, recovery expectation, health literacy, insight into and action on advance care planning, and confusion by any question, would predict the likelihood of consenting to a behavioral intervention requiring substantial subject engagement. Some of these characteristics may also relate to adherence to preventive care or treatment recommendations. We did not specifically hypothesize about demographic differences.

METHODS

Study Design

This was a prospective observational study conducted within a larger behavioral intervention.

Screening Question Design

We adapted our screening questions from several previously validated surveys, selecting questions related to perceived stress and self‐efficacy,[11] recovery expectations, health literacy/medication label interpretation,[12] and discussing advance directives (Table 1). Some of these characteristics may relate to adherence to preventive care or treatment programs[13, 14] or to clinical outcomes.[15, 16]

Table 1. Screening Questions

1. Screening question: In the last week, how often have you felt that you are unable to control the important things in your life? (Rarely, sometimes, almost always)
   Adapted from: In the last month, how often have you felt that you were unable to control the important things in your life? (Never, almost never, sometimes, fairly often, very often)
   Source: Adapted from the Perceived Stress Scale (PSS-14).[11]
   Construct: Perceived stress, self-efficacy

2. Screening question: In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them? (Rarely, sometimes, almost always)
   Adapted from: In the last month, how often have you felt difficulties were piling up so high that you could not overcome them? (Never, almost never, sometimes, fairly often, very often)
   Source: Adapted from the Perceived Stress Scale (PSS-14).[11]
   Construct: Perceived stress, self-efficacy

3. Screening question: How sure are you that you can go back to the way you felt before being hospitalized? (Not sure at all, somewhat sure, very sure)
   Source: Courtesy of Phil Clark, PhD, University of Rhode Island, drawing on research on resilience. Similar questions are used in other studies, including studies of postsurgical recovery.[29, 30, 31]
   Construct: Recovery expectation, resilience

4. Screening question: Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself? (Yes, no)
   Source: Based on consumer-targeted materials on advance care planning. http://www.agingwithdignity.org/five-wishes.php; http://www.nhqualitycampaign.org/files/emmpguides/6_AdvanceCarePlanning_TAW_Guide.pdf
   Construct: Advance care planning

5. Screening question: (Show patient a picture of a prescription label.) How many times a day should someone take this medicine? (Correct, incorrect)
   Adapted from: (Show patient a picture of an ice cream label.) If you eat the entire container, how many calories will you eat? (Correct, incorrect)
   Source: Adapted from Pfizer's Clear Health Communication: The Newest Vital Sign.[12]
   Construct: Health literacy

Prior to administering the screening questions, we performed cognitive testing with residents of an assisted‐living facility (N=10), a population that resembles our study's target population. In response to cognitive testing, we eliminated a question not interpreted easily by any of the participants, identified wording changes to clarify questions, simplified answer choices for ease of response (especially because questions are delivered verbally), and moved the most complicated (and potentially most embarrassing) question to the end, with more straightforward questions toward the beginning. We also substantially enlarged the image of a standard medication label to improve readability. Our final tool included 5 questions (Table 1).

The final instrument prompted coaches to record patient confusion. Additionally, the advance‐directive question included a "refused to answer" option, and the medication question included "unable to answer (needs glasses, too tired, etc.)," a potential marker of low health literacy if used as an excuse to avoid embarrassment.[17]

Setting

We recruited inpatients at 5 Rhode Island acute‐care hospitals, ranging from 174 to 719 beds: 1 community hospital, 3 teaching hospitals, and a tertiary‐care center and teaching hospital. Recruitment occurred from November 2010 to April 2011. The hospitals' respective institutional review boards approved the screening questions.

Study Population

We recruited a convenience sample of consecutively identified hospitalized Medicare fee‐for‐service beneficiaries, identified as (1) eligible for the subsequent behavioral intervention based on inpatient census lists and (2) willing to discuss an offer for a home‐based behavioral intervention. The behavioral intervention, based on the Care Transitions Intervention and described elsewhere,[3, 18] included a home visit and 2 phone calls (each about 1 hour). Coaches used a personal health record to help patients and/or caregivers better manage their health by (1) being able to list their active medical conditions and medications and (2) understanding warning signs indicating a need to reach out for help, including getting a timely medical appointment after hospitalization. The population for the present study included individuals approached to discuss participation in the behavioral intervention who also agreed to answer the screening questions.

Inclusion/Exclusion Criteria

We included hospitalized Medicare fee‐for‐service beneficiaries. We excluded patients who were current long‐term care residents, were to be discharged to long‐term or skilled care, or had a documented hospice referral. We also excluded patients with limited English proficiency or who were judged to have inadequate cognitive function, unless a caregiver agreed to receive the intervention as a proxy. We made these exclusions when recruiting for the behavioral intervention. Because we presented the screening questions to a subset of those approached for the behavioral intervention, we did not further exclude anyone. In other words, we offered the screening questions to all 295 people we approached during this study time period (100%).

Screening‐Question Study Process

Coaches asked patients to answer the 5 screening questions immediately after offering them the opportunity to participate in the behavioral intervention, regardless of whether or not they accepted the behavioral intervention. This study examines the subset of patients approached for the behavioral intervention who verbally consented to answer the screening questions.

Data Sources and Covariates

We analyzed primary data from the screening questions and behavioral intervention (for those who consented to participate), as well as Medicare claims and Medicaid enrollment data. We matched screening‐question data from November 2010 through April 2011 with Medicare Part A claims from October 2010 through May 2011 to calculate 30‐day readmission rates.

We obtained the following information for patients offered the behavioral intervention: (1) responses to screening questions, (2) whether patients consented to the behavioral intervention, (3) exposure to the behavioral intervention, and (4) recruitment date. Medicare claims data included (1) admission and discharge dates to calculate the length of stay, (2) index diagnosis, (3) hospital, and (4) site of discharge. Medicare enrollment data provided information on (1) Medicaid/Medicare dual‐eligibility status, (2) sex, and (3) patient‐reported race. We matched data based on patient name and date of birth. Our primary outcome was consent to the behavioral intervention. Secondarily, we reviewed posthospital utilization patterns, including hospital readmission, emergency‐department use, and use of home‐health services.
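
The matching mechanics are not described beyond patient name and date of birth, so the following is a minimal, hypothetical sketch of that record linkage (the study itself used SAS; the file and column names here are ours, not the study's):

```python
# Hypothetical illustration of the record linkage described above; the study's
# actual matching procedure is not specified beyond name and date of birth.
import pandas as pd

screening = pd.read_csv("screening_responses.csv")   # hypothetical: name, dob, responses, consented
claims = pd.read_csv("medicare_part_a_claims.csv")   # hypothetical: name, dob, admit/discharge dates

# Normalize the join keys to reduce trivial mismatches (our assumption, not the paper's).
for frame in (screening, claims):
    frame["name"] = frame["name"].str.strip().str.upper()
    frame["dob"] = pd.to_datetime(frame["dob"])

# A left join keeps every screened patient and attaches claims where a match exists;
# 30-day readmission and other utilization outcomes can then be derived from the
# admission and discharge dates on the matched claims.
linked = screening.merge(claims, on=["name", "dob"], how="left")
```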

Statistical Analysis

We categorized patients into 2 groups (Figure 1): participants (consented to the behavioral intervention) and nonparticipants (eligible for the behavioral intervention but declined to participate). We excluded responses for those confused by a question (no response). For the response scales "rarely, sometimes, almost always" and "not sure at all, somewhat sure, very sure," we isolated the most negative response, grouping the middle and most positive responses (Table 2). For the medication‐label question, we grouped "incorrect" and "unable to answer (needs glasses, too tired, etc.)" responses. We compared demographic differences between behavioral‐intervention participants and nonparticipants using χ2 tests (categorical variables) and Student t tests (continuous variables). We then used multivariate logistic regression to analyze differences in consent to the behavioral intervention based on screening‐question responses, adjusting for demographics that differed significantly in the bivariate comparisons.
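
The analyses were run in SAS (noted below), but as an illustration of this pipeline (dichotomizing responses, bivariate tests, then per-question adjusted logistic regression, which is our reading of how Table 2 was produced), here is a hedged Python sketch; all column names are hypothetical:

```python
# Illustrative sketch only; the study used SAS 9.2, and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("screening_responses.csv")  # hypothetical: one row per screened patient

# Isolate the most negative response on each scale, grouping the rest (Table 2).
df["out_of_control"] = (df["q1"] == "almost always").astype(int)
df["not_confident"] = (df["q3"] == "not sure at all").astype(int)
# Group "incorrect" with "unable to answer" for the medication-label question.
df["med_label_wrong"] = df["q5"].isin(["incorrect", "unable to answer"]).astype(int)

# Bivariate comparisons: chi-square for categorical, Student t test for continuous.
chi2, p_age, _, _ = stats.chi2_contingency(pd.crosstab(df["consented"], df["age_group"]))
t, p_los = stats.ttest_ind(df.loc[df["consented"] == 1, "los"],
                           df.loc[df["consented"] == 0, "los"])

# One logistic model per screening question, adjusted for the demographics that
# differed significantly in the bivariate comparisons.
adjusters = "C(age_group) + los + C(hospital) + discharged_home_no_services"
for predictor in ["out_of_control", "not_confident", "med_label_wrong"]:
    model = smf.logit(f"consented ~ {predictor} + {adjusters}", data=df).fit(disp=0)
    or_ = np.exp(model.params[predictor])
    lo, hi = np.exp(model.conf_int().loc[predictor])
    print(f"{predictor}: adjusted OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```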

Figure 1. Study population.

Table 2. Association Between Consent to Behavioral Intervention and Screening-Question Response, by Question (N=260)

Screening-Question Response | Adjusted OR (95% CI) | P Value

In the last week, how often have you felt that you are unable to control the important things in your life?
  Out of control (almost always) | 0.35 (0.14–0.92) | 0.034a
  In control (sometimes, rarely) | 1.00 (Ref)
In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them?
  Overwhelmed (almost always) | 0.41 (0.16–1.07) | 0.069
  Not overwhelmed (sometimes, rarely) | 1.00 (Ref)
How sure are you that you can go back to the way you felt before being hospitalized?
  Not confident (not sure at all) | 0.17 (0.06–0.45) | 0.001a
  Confident (somewhat sure, very sure) | 1.00 (Ref)
Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself?
  No | 0.45 (0.13–1.64) | 0.227
  Yes | 1.00 (Ref)
How many times a day should someone take this medicine? (Show patient a medication label.)
  Incorrect answer | 3.82 (1.12–13.03) | 0.033a
  Correct answer | 1.00 (Ref)
Confused by any question?
  Yes | 0.11 (0.05–0.24) | 0.001a
  No | 1.00 (Ref)

NOTE: Abbreviations: CI, confidence interval; OR, odds ratio; Ref, reference. aSignificant at P<0.05. Results do not add up to 260 responses for all questions because "confused by question" responses were excluded from each response set.

The authors used SAS version 9.2 (SAS Institute, Inc., Cary, NC) for all analyses.

RESULTS

Of the 295 patients asked to complete the screening questions, 260 (88.1%) consented to answer and 35 (11.9%) declined. More than half of those who answered the screening questions consented to participate in the behavioral intervention (160/260; 61.5%) (Figure 1). Compared with nonparticipants, participants in the behavioral intervention were younger (25.6% vs 40.0% aged ≥85 years, P=0.028), had a longer average length of hospital stay (7.9 vs 6.1 days, P=0.008), were more likely to be discharged home without clinical services (35.0% vs 23.0%, P=0.041), and were unevenly distributed among the 5 recruitment‐site hospitals, coming primarily from the teaching hospitals (P<0.001) (Table 3). There were no significant differences based on race, sex, dual‐eligible Medicare/Medicaid status, presence of a caregiver, or index diagnosis.

Table 3. Patient Characteristics by Behavioral Intervention Consent Status (N=260)

Patient Characteristics | Declined (n=100) | Consented (n=160) | P Value
Male, n (%) | 34 (34.0) | 52 (32.5) | 0.803
Race, n (%) | | | 0.691
  White | 94 (94.0) | 151 (94.4)
  Black | 2 (2.0) | 5 (3.1)
  Other | 4 (4.0) | 4 (2.5)
Age, y, n (%) | | | 0.028a
  <65 | 17 (17.0) | 23 (14.4)
  65–74 | 14 (14.0) | 42 (26.3)
  75–84 | 29 (29.0) | 54 (33.8)
  ≥85 | 40 (40.0) | 41 (25.6)
Dual eligible, n (%)b | 11 (11.0) | 24 (15.0) | 0.358
Caregiver present, n (%) | 17 (17.0) | 34 (21.3) | 0.401
Length of stay, mean (SD), d | 6.1 (4.1) | 7.9 (4.8) | 0.008a
Index diagnosis, n (%)
  Acute MI | 3 (3.0) | 6 (3.8) | 0.806
  CHF | 6 (6.0) | 20 (12.5) | 0.111
  Pneumonia | 7 (7.0) | 9 (5.6) | 0.572
  COPD | 6 (6.0) | 14 (8.8) | 0.484
Discharged home without clinical services, n (%)c | 23 (23.0) | 56 (35.0) | 0.041a
Hospital site | | | <0.001a
  Hospital 1 | 15 (15.0) | 43 (26.9)
  Hospital 2 | 20 (20.0) | 26 (16.3)
  Hospital 3 | 15 (15.0) | 23 (14.4)
  Hospital 4 | 2 (2.0) | 48 (30.0)
  Hospital 5 | 48 (48.0) | 20 (12.5)

NOTE: Abbreviations: CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease; MI, myocardial infarction; SD, standard deviation. aSignificant at P<0.05. bEligible for both Medicare and Medicaid benefits. cDischarged home with no planned clinical services, as opposed to being discharged to home health care, hospice, or skilled care. Hospitals 3–5 are teaching hospitals.

Patients who reported being unable to control important things in their lives had 65% lower odds of consenting to the behavioral intervention than those who felt in control (odds ratio [OR]: 0.35, 95% confidence interval [CI]: 0.14–0.92), and those who did not feel confident about recovering had 83% lower odds of consenting (OR: 0.17, 95% CI: 0.06–0.45). Individuals who were confused by any question had 89% lower odds of consenting (OR: 0.11, 95% CI: 0.05–0.24). Individuals who answered the medication question incorrectly had nearly 4 times the odds of consenting (OR: 3.82, 95% CI: 1.12–13.03). There were no significant differences in consent for feeling overwhelmed (difficulties piling up) or for having discussed advance care planning with family members or doctors.
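
The percentage phrasings above are direct transformations of the adjusted odds ratios: an OR below 1 implies (1 − OR) × 100% lower odds. A quick arithmetic check, using the Table 2 values:

```python
# Percent-lower-odds phrasing implied by an adjusted OR below 1 (values from Table 2).
def pct_lower_odds(odds_ratio: float) -> float:
    return (1.0 - odds_ratio) * 100.0

for label, odds_ratio in [("out of control", 0.35),
                          ("not confident in recovery", 0.17),
                          ("confused by any question", 0.11)]:
    print(f"{label}: {pct_lower_odds(odds_ratio):.0f}% lower odds of consenting")
# -> 65%, 83%, and 89% lower odds, matching the text above
```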

We had insufficient power to detect significant differences in posthospital utilization (including hospital readmission, emergency‐department use, and receipt of home health), based on screening‐question responses (data not shown).

DISCUSSION

We found that patients who declined to participate in the behavioral intervention (eligible nonparticipants) differed from participants in 3 important ways: perceived stress, recovery expectation, and health literacy. As hypothesized, patients with higher perceived stress and lower recovery expectation were less likely to consent to the behavioral intervention, even after adjusting for demographic and healthcare‐utilization differences. Contrary to our hypothesis, patients who answered the medication question incorrectly were more likely to consent to the intervention than those who answered correctly.

Characterizing nonparticipants and participants can offer important insight into the limitations of the research that informs clinical guidelines and behavioral interventions. Such characteristics, if associated with lower rates of adherence to recommended health behaviors or treatment plans, could also indicate how to better engage patients in interventions or other aspects of their care. For example, self‐efficacy (closely related to perceived stress) and hopelessness regarding clinical outcomes (similar to low recovery expectation in the present study) are associated with nonadherence to medication plans and other care in some populations.[5, 6] More extreme stress, like that following a major medical event, has also been associated with a lower rate of adherence to medication regimens and resulting higher rates of hospital readmission and mortality.[19, 20] People with low health literacy (compared with adequate health literacy) are more likely to report being confused about their medications, requesting help to read medication labels, and missing appointments due to trouble reading reminder cards.[9] Identifying these characteristics may assist providers in helping patients address adherence barriers: first by accurately identifying the root of a patient's issues (eg, whether the lack of confidence in recovery stems from a lack of resources or social support), then by referring to community resources where possible. For example, some states (including Rhode Island, this study's location) have Aging and Disability Resource Centers dedicated to linking elderly people with transportation, decision support, and other resources that support quality care.

The association between health literacy and intervention participation remains uncertain. Our question, which assessed interpretation of a prescription label as a health‐literacy proxy, may have given patients insight into their limited health literacy that motivated them to accept the subsequent behavioral intervention. Others have found that lower‐health‐literacy patients want their providers to know that they did not understand some health words,[9] though they may be less likely to ask questions, request additional services, or seek new information during a medical encounter.[21] In our study, those who correctly answered the medication‐label question were almost mutually exclusive from those who reported being stressed (12% overlap; data not shown). Thus, patients who correctly answer this question may correctly realize that they do not need the support offered by the behavioral intervention and decline to participate. For other patients, perceived stress and poor recovery expectations may be more immediate and important determinants of declination, with patients too stressed to volunteer for another task, even one offering much‐needed assistance.

The frequency with which patients were confused by the questions merits further comment and may also be driven by stress. Though each question seeks to identify the impact of a specific construct (Table 1), being confused by any question may reflect a more general (or subacute) level of cognitive impairment or generalized low health literacy not limited to the applied numeracy of the medication‐label question. We excluded confused responses to demonstrate more clearly the impact of each individual construct.

The impact of these characteristics may be affected by study design or other characteristics. One of the few studies to examine (via RCT) how methods affect consent found that participation decreased with increasing complexity of the consent process: written consent yielded the lowest participation, limited written consent was higher, and verbal consent was the highest.[10] Other tactics to increase consent include monetary incentives,[22] culturally sensitive materials,[7] telephone reminders,[23] an opt‐out instead of opt‐in approach,[23] and an open design where participants know which treatment they are receiving.[23] We do not know how these tactics relate to the characteristics captured in our screening questions, although other characteristics we measured, such as patients' self‐identified race, have been associated with intervention participation and access to care,[8, 24, 25] and patients who perceive that the benefit of the intervention outweighs expected risks and time requirements are more likely to consent.[4] We intentionally minimized the number of screening questions to encourage participation. The high rate of consent to our screening questions compared with consent to the (more involved) behavioral intervention reveals how sensitive patients are to the perceived invasiveness of an intervention.

We note several limitations. First, overall generalizability is limited by our small sample size, use of consecutive convenience sampling, and exclusion criteria (eg, patients discharged to long‐term or skilled nursing care). These results may also not apply to patients who are not hospitalized; hospitalized patients may have different motivations and stressors regarding their involvement in their care.

Additionally, although we included as many people with mild cognitive impairment as possible by proxy through caregivers, we excluded some who did not have caregivers, potentially limiting our ability to assess how cognition affects the choice to accept the behavioral intervention. Because researchers often explicitly exclude individuals based on cognitive impairment, differences between recruited subjects and the population at large may be particularly pronounced among elderly patients, where up to half of the eligible population may be affected by cognitive impairment.[26] Further research into successfully engaging caregivers as a way to reach otherwise‐excluded patients with cognitive impairment could help to mitigate these threats to generalizability.

Finally, our screening questions are based on validated questions, but we reworded them, simplified answer choices, and removed them from their original context. Thus, the questions were not validated in our population or when administered in this manner. Although we conducted cognitive testing, further validity and reliability testing is necessary to translate these questions into a general screening tool. The medication‐label question also requires revision; in data collection and analysis, we assumed that patients who were "unable to answer (needs glasses, too tired, etc.)" were masking an inability to respond correctly. Though the use of this excuse is cited in the literature,[17] we cannot be certain that our treatment of it in these screening questions is generalizable. Similar concerns apply to how we grouped responses. Isolating the most negative response (by grouping the middle answer with the most positive answer) most specifically identifies individuals likely to need assistance and is therefore clinically pertinent, but it may fail to identify individuals who also need help yet do not choose the most extreme answer. Further research to refine the screening questions might also reconsider the timeframe of the perceived‐stress questions (past week rather than past month); this timeframe may capture stress specific to the acute medical situation rather than general or unrelated perceived stress. Though this study cannot test the hypothesis, individuals with higher pre‐illness perceived stress may be more interested in addressing the issues that were stressors prior to acute illness than in the offered behavioral intervention. Additionally, some of the questions were highly correlated (Q1 and Q2), indicating the potential to shorten the screening questionnaire.

Still, these findings further the discussion of how to identify and consent hospitalized patients for participation in behavioral interventions, both for research and for routine clinical care. Researchers should specifically consider how to engage individuals who are stressed and not confident about recovery to improve reach and effectiveness. For example, interventions should prospectively collect data on stress and confidence in recovery and include protocols to support people identified with these characteristics. These characteristics may also offer insight into improving patient and caregiver engagement; more research is needed into characteristics related to patients' willingness to seek assistance in care. We are not the first to suggest that characteristics not observed in medical charts may impact patient completion of or response to behavioral interventions,[27, 28] and considering differences between participants and eligible nonparticipants in clinical care delivery and interventions can strengthen the evidence base for clinical improvements, particularly related to patient self‐management. The implications are useful both for practicing clinicians and for larger systems examining the comparative effectiveness of patient interventions and generalizing results from RCTs.

Acknowledgments

The authors thank Phil Clark, PhD, and the SENIOR Project (Study of Exercise and Nutrition in Older Rhode Islanders) research team at the University of Rhode Island for formulating 1 of the screening questions, and Marissa Meucci for her assistance with the cognitive testing and formative research for the screening questions.

Disclosures

The analyses on which this study is based were performed by Healthcentric Advisors under contract HHSM 5002011‐RI10C, titled Utilization and Quality Control Peer Review for the State of Rhode Island, sponsored by the Centers for Medicare and Medicaid Services, US Department of Health and Human Services. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US government. The authors report no conflicts of interest.

References
  1. Boer SPM, Lenzen MJ, Oemrawsingh RM, et al. Evaluating the 'all‐comers' design: a comparison of participants in two 'all‐comers' PCI trials with non‐participants. Eur Heart J. 2011;32(17):2161–2167.
  2. Stirman SW, DeRubeis RJ, Crits‐Christoph P, Rothman A. Can the randomized controlled trial literature generalize to nonrandomized patients? J Consult Clin Psychol. 2005;73(1):127–135.
  3. Voss R, Baier RR, Gardner R, Butterfield K, Lehrman S, Gravenstein S. The care transitions intervention: translating from efficacy to effectiveness. Arch Intern Med. 2011;171(14):1232–1237.
  4. Verheggen FW, Nieman F, Jonkers R. Determinants of patient participation in clinical studies requiring informed consent: why patients enter a clinical trial. Patient Educ Couns. 1998;35(2):111–125.
  5. Kamau TM, Olson VG, Zipp GP, Clark M. Coping self‐efficacy as a predictor of adherence to antiretroviral therapy in men and women living with HIV in Kenya. AIDS Patient Care STDS. 2011;25(9):557–561.
  6. Joyner‐Grantham J, Mount DL, McCorkle OD, Simmons DR, Ferrario CM, Cline DM. Self‐reported influences of hopelessness, health literacy, lifestyle action, and patient inertia on blood pressure control in a hypertensive emergency department population. Am J Med Sci. 2009;338(5):368–372.
  7. Watson JM, Torgerson DJ. Increasing recruitment to randomised trials: a review of randomised controlled trials. BMC Med Res Methodol. 2006;6:34.
  8. Cooke CR, Erickson SE, Watkins TR, Matthay MA, Hudson LD, Rubenfeld GD. Age‐, sex‐, and race‐based differences among patients enrolled versus not enrolled in acute lung injury clinical trials. Crit Care Med. 2010;38(6):1450–1457.
  9. Wolf MS, Williams MV, Parker RM, Parikh NS, Nowlan AW, Baker DW. Patients' shame and attitudes toward discussing the results of literacy screening. J Health Commun. 2007;12(8):721–732.
  10. Marco CA. Impact of detailed informed consent on research subjects' participation: a prospective, randomized trial. J Emerg Med. 2008;34(3):269–275.
  11. Cohen S, Kamarck T, Mermelstein R. A global measure of perceived stress. J Health Soc Behav. 1983;24:385–396. Available at: http://www.psy.cmu.edu/~scohen/globalmeas83.pdf. Accessed May 10, 2012.
  12. Pfizer, Inc. Clear Health Communication: The Newest Vital Sign. Available at: http://www.pfizerhealthliteracy.com/asset/pdf/NVS_Eng/files/nvs_flipbook_english_final.pdf. Accessed May 10, 2012.
  13. White S, Chen J, Atchison R. Relationship of preventive health practices and health literacy: a national study. Am J Health Behav. 2008;32(3):227–242.
  14. Kripalani S, Henderson LE, Chiu EY, Robertson R, Kolm P, Jacobson TA. Predictors of medication self‐management skill in a low‐literacy population. J Gen Intern Med. 2006;21:852–856.
  15. Mondloch MV, Cole DC, Frank JW. Does how you do depend on how you think you'll do? A systematic review of the evidence for a relation between patients' recovery expectations and health outcomes [published correction appears in CMAJ. 2001;165(10):1303]. CMAJ. 2001;165(2):174–179.
  16. Zhang B, Wright AA, Huskamp HA, et al. Health care costs in the last week of life: associations with end‐of‐life conversations. Arch Intern Med. 2009;169(5):480–488.
  17. Emmerton LM, Mampallil L, Kairuz T, McKauge LM, Bush RA. Exploring health literacy competencies in community pharmacy. Health Expect. 2010;15(1):12–22.
  18. Coleman EA, Parry C, Chalmers S, Min SJ. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822–1828.
  19. Shemesh E, Rudnick A, Kaluski E, et al. A prospective study of posttraumatic stress symptoms and non‐adherence in survivors of a myocardial infarction (MI). Gen Hosp Psychiatry. 2001;23:215–222.
  20. Shemesh E, Yehuda R, Milo O, et al. Posttraumatic stress, non‐adherence, and adverse outcomes in survivors of a myocardial infarction. Psychosom Med. 2004;66:521–526.
  21. Hironaka LK, Paasche‐Orlow MK. The implications of health literacy on patient‐provider communication. Arch Dis Child. 2008;93:428–432.
  22. Mapstone J, Elbourne D, Roberts IG. Strategies to improve recruitment to research studies. Cochrane Database Syst Rev. 2007;(2):MR000013.
  23. Treweek S, Pitkethly M, Cook J, et al. Strategies to improve recruitment to randomised controlled trials. Cochrane Database Syst Rev. 2010;(4):MR000013.
  24. Wilkes AE, Bordenave K, Vinci L, Peek ME. Addressing diabetes racial and ethnic disparities: lessons learned from quality improvement collaboratives. Diabetes Manag (Lond). 2011;1(6):653–660.
  25. King AC, Cao D, Southard CC, Matthews A. Racial differences in eligibility and enrollment in a smoking cessation clinical trial. Health Psychol. 2011;30(1):40–48.
  26. Taylor JS, DeMers SM, Vig EK, Borson S. The disappearing subject: exclusion of people with cognitive impairment from research. J Am Geriatr Soc. 2012;60(3):413–419.
  27. Brazier JE, Dixon S, Ratcliffe J. The role of patient preferences in cost‐effectiveness analysis: a conflict of values? Pharmacoeconomics. 2009;27(9):705–712.
  28. Fan VS, Gaziano JM, Lew R, et al. A comprehensive care management program to prevent chronic obstructive pulmonary disease hospitalizations. Ann Intern Med. 2012;156(10):673–683.
  29. Flood AB, Lorence DP, Ding J, McPherson K, Black NA. The role of expectations in patients' reports of post‐operative outcomes and improvement following therapy. Med Care. 1993;31:1043–1056.
  30. Borkan JM, Quirk M. Expectations and outcomes after hip fracture among the elderly. Int J Aging Hum Dev. 1992;34:339–350.
  31. Petrie KJ, Weinman J, Sharpe N, Buckley J. Role of patients' view of their illness in predicting return to work and functioning after myocardial infarction: longitudinal study. BMJ. 1996;312:1191–1194.
Article PDF
Issue
Journal of Hospital Medicine - 8(4)
Page Number
208-214
Sections
Files
Files
Article PDF
Article PDF

Randomized controlled trials (RCTs) generally provide the most rigorous evidence for clinical practice guidelines and quality‐improvement initiatives. However, 2 major shortcomings limit the ability to broadly apply these results to the general population. One has to do with sampling bias (due to subject consent and inclusion/exclusion criteria) and the other with potential differences between participants and eligible nonparticipants. The latter may be of particular importance in trials of behavioral interventions (rather than medication trials), which often require substantial participant effort.

First, individuals who provide written consent to participate in RCTs of behavioral interventions typically represent a minority of those approached and therefore may not be representative of the target population. Although the consenting proportion is often not disclosed, some estimate that only 35%50% of eligible subjects typically participate.[1, 2, 3] These estimates mirror the authors' prior experience with a 55.2% consent rate among subjects approached for a Medicare quality‐improvement behavioral intervention.[3] Though the literature is sparse, it suggests that eligible individuals who decline to participate in either interventions or usual care may differ from participants in their perception of intervention risks and effort[4] or in their levels of self‐efficacy or confidence in recovery.[5, 6] Relatively low enrollment rates mean that much of the population remains unstudied; however, evidence‐based interventions are often applied to populations broader than those included in the original analyses.

Additionally, although some nonparticipants may correctly decide that they do not need the assistance of a proposed intervention and therefore decline to participate, others may inappropriately judge the intervention's potential benefit and applicability when declining. In other words, electing to not participate in a study, despite eligibility, may reflect more than a refusal of inconvenience, disinterest, or desire to contribute to knowledge; for some individuals it may offer a proxy statement about health knowledge, personal beliefs, attitudes, and needs, including perceived stress,[5] cultural relevance,[7, 8] and literacy/health literacy.[9, 10] Characterizing these patients can help us to modify recruitment approaches and improve participation so that participants better represent the target population. If these differences also relate to patients' adherence to care recommendations, a more nuanced understanding could improve ways to identify and engage potentially nonadherent patients to improve health outcomes.

We hypothesized that we could identify characteristics that differ between behavioral‐intervention participants and eligible nonparticipants using a set of screening questions. We proposed that these characteristics, including constructs related to perceived stress, recovery expectation, health literacy, insight, and action into advance care planning and confusion by any question, would predict the likelihood of consenting to a behavioral intervention requiring substantial subject engagement. Some of these characteristics may relate to adherence to preventive care or treatment recommendations. We did not specifically hypothesize about the distribution of demographic differences.

METHODS

Study Design

Prospective observational study conducted within a larger behavioral intervention.

Screening Question Design

We adapted our screening questions from several previously validated surveys, selecting questions related to perceived stress and self‐efficacy,[11] recovery expectations, health literacy/medication label interpretation,[12] and discussing advance directives (Table 1). Some of these characteristics may relate to adherence to preventive care or treatment programs[13, 14] or to clinical outcomes.[15, 16]

Screening Questions
Screening QuestionAdapted From Original Validated QuestionSourceConstruct
In the last week, how often have you felt that you are unable to control the important things in your life? (Rarely, sometimes, almost always)In the last month, how often have you felt that you were unable to control the important things in your life? (Never, almost never, sometimes, fairly often, very often)Adapted from the Perceived Stress Scale (PSS‐14).[11]Perceived stress, self‐efficacy
In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them? (Rarely, sometimes, almost always)In the last month, how often have you felt difficulties were piling up so high that you could not overcome them? (Never, almost never, sometimes, fairly often, very often)Adapted from the Perceived Stress Scale (PSS‐14).[11]Perceived stress, self‐efficacy
How sure are you that you can go back to the way you felt before being hospitalized? (Not sure at all, somewhat sure, very sure) Courtesy of Phil Clark, PhD, University of Rhode Island, drawing on research on resilience. Similar questions are used in other studies, including studies of postsurgical recovery.[29, 30, 31]Recovery expectation, resilience
Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself? (Yes, no) Based on consumer‐targeted materials on advance care planning. http://www.agingwithdignity.org/five‐wishes.php; http://www.nhqualitycampaign.org/files/emmpguides/6_AdvanceCarePlanning_TAW_Guide.pdfAdvance care planning
(Show patient a picture of prescription label.) How many times a day should someone take this medicine? (Correct, incorrect)(Show patient a picture of ice cream label.) If you eat the entire container, how many calories will you eat? (Correct, incorrect)Adapted from Pfizer's Clear Health Communication: The Newest Vital Sign.[12]Health literacy

Prior to administering the screening questions, we performed cognitive testing with residents of an assisted‐living facility (N=10), a population that resembles our study's target population. In response to cognitive testing, we eliminated a question not interpreted easily by any of the participants, identified wording changes to clarify questions, simplified answer choices for ease of response (especially because questions are delivered verbally), and moved the most complicated (and potentially most embarrassing) question to the end, with more straightforward questions toward the beginning. We also substantially enlarged the image of a standard medication label to improve readability. Our final tool included 5 questions (Table 1).

The final instrument prompted coaches to record patient confusion. Additionally, the advance‐directive question included a refused to answer option and the medication question included unable to answer (needs glasses, too tired, etc.), a potential marker of low health literacy if used as an excuse to avoid embarrassment.[17]

Setting

We recruited inpatients at 5 Rhode Island acute‐care hospitals, including 1 community hospital, 3 teaching hospitals, and a tertiary‐care center and teaching hospital, ranging from 174 beds to 719 beds. Recruitment occurred from November 2010 to April 2011. The hospitals' respective institutional review boards approved the screening questions.

Study Population

We recruited a convenience sample of consecutively identified hospitalized Medicare fee‐for‐service beneficiaries, identified as (1) eligible for the subsequent behavioral intervention based on inpatient census lists and (2) willing to discuss an offer for a home‐based behavioral intervention. The behavioral intervention, based on the Care Transitions Intervention and described elsewhere,[3, 18] included a home visit and 2 phone calls (each about 1 hour). Coaches used a personal health record to help patients and/or caregivers better manage their health by (1) being able to list their active medical conditions and medications and (2) understanding warning signs indicating a need to reach out for help, including getting a timely medical appointment after hospitalization. The population for the present study included individuals approached to discuss participation in the behavioral intervention who also agreed to answer the screening questions.

Inclusion/Exclusion Criteria

We included hospitalized Medicare fee‐for‐service beneficiaries. We excluded patients who were current long‐term care residents, were to be discharged to long‐term or skilled care, or had a documented hospice referral. We also excluded patients with limited English proficiency or who were judged to have inadequate cognitive function, unless a caregiver agreed to receive the intervention as a proxy. We made these exclusions when recruiting for the behavioral intervention. Because we presented the screening questions to a subset of those approached for the behavioral intervention, we did not further exclude anyone. In other words, we offered the screening questions to all 295 people we approached during this study time period (100%).

Screening‐Question Study Process

Coaches asked patients to answer the 5 screening questions immediately after offering them the opportunity to participate in the behavioral intervention, regardless of whether or not they accepted the behavioral intervention. This study examines the subset of patients approached for the behavioral intervention who verbally consented to answer the screening questions.

Data Sources and Covariates

We analyzed primary data from the screening questions and behavioral intervention (for those who consented to participate), as well as Medicare claims and Medicaid enrollment data. We matched screening‐question data from November 2010 through April 2011 with Medicare Part A claims from October 2010 through May 2011 to calculate 30‐day readmission rates.

We obtained the following information for patients offered the behavioral intervention: (1) responses to screening questions, (2) whether patients consented to the behavioral intervention, (3) exposure to the behavioral intervention, and (4) recruitment date. Medicare claims data included (1) admission and discharge dates to calculate the length of stay, (2) index diagnosis, (3) hospital, and (4) site of discharge. Medicare enrollment data provided information on (1) Medicaid/Medicare dual‐eligibility status, (2) sex, and (3) patient‐reported race. We matched data based on patient name and date of birth. Our primary outcome was consent to the behavioral intervention. Secondarily, we reviewed posthospital utilization patterns, including hospital readmission, emergency‐department use, and use of home‐health services.

Statistical Analysis

We categorized patients into 2 groups (Figure 1): participants (consented to the behavioral intervention) and nonparticipants (eligible for the behavioral intervention but declined to participate). We excluded responses for those confused by the question (no response). For the response scales never, sometimes, almost always and not at all sure, somewhat sure, very sure, we isolated the most negative response, grouping the middle and most positive responses (Table 2). For the medication‐label question, we grouped incorrect and unable to answer (needs glasses, too tired, etc.) responses. We compared demographic differences between behavioral intervention participants and nonparticipants using 2 tests (categorical variables) and Student t tests (continuous variables). We then used multivariate logistic regression to analyze differences in consent to the behavioral intervention based on screening‐question responses, adjusting for demographics that differed significantly in the bivariate comparisons.

Figure 1
Study population.
Association Between Consent to Behavioral Intervention and Screening Question Response, by Question (N=260)
Screening‐Question ResponseAdjusted OR (95% CI)P Value
  • NOTE: Abbreviations: CI, confidence interval; OR, odds ratio; Ref, reference.

  • Significant at P<0.05. Results do not add up to 260 responses in all questions due to the exclusion of confused by question from each response set.

In the last week, how often have you felt that you are unable to control the important things in your life?  
Out of control (Almost always)0.35 (0.14‐0.92)0.034a
In control (Sometimes, rarely)1.00 (Ref)
In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them?  
Overwhelmed (Almost always)0.41 (0.16‐1.07)0.069
Not overwhelmed (Sometimes, rarely)1.00 (Ref)
How sure are you that you can go back to the way you felt before being hospitalized?  
Not confident (Not sure at all)0.17 (0.06‐0.45)0.001a
Confident (Somewhat sure, very sure)1.00 (Ref)
Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself?  
No0.45 (0.13‐1.64)0.227
Yes1.00 (Ref)
How many times a day should someone take this medicine? (Show patient a medication label)  
Incorrect answer3.82 (1.12‐13.03)0.033a
Correct answer1.00 (Ref)
Confused by any question?  
Yes0.11 (0.05‐0.24)0.001a
No1.00 (Ref)

The authors used SAS version 9.2 (SAS Institute, Inc., Cary, NC) for all analyses.

RESULTS

Of the 295 patients asked to complete the screening questions, 260 (88.1%) consented to answer the screening questions and 35 (11.9%) declined. More than half of those who answered the screening questions consented to participate in the behavioral intervention (160; 61.5%) (Figure 1). When compared with nonparticipants, participants in the behavioral intervention were younger (25.6% age 85 years vs 40% age 85 years, P=0.028), had a longer average length of hospital stay (7.9 vs 6.1 days, P=0.008), were more likely to be discharged home without clinical services (35.0% vs 23.0%, P=0.041), and were unevenly distributed between the 5 recruitment‐site hospitals, coming primarily from the teaching hospitals (P<0.001) (Table 3). There were no significant differences based on race, sex, dual‐eligible Medicare/Medicaid status, presence of a caregiver, or index diagnosis.

Patient Characteristics by Behavioral Intervention Consent Status (N=260)
Patient CharacteristicsDeclined (n=100)Consented (n=160)P Value
  • NOTE: Abbreviations: CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease; MI, myocardial infarction; SD, standard deviation.

  • Significant at P<0.05.

  • Eligible for both Medicare and Medicaid benefits.

  • Discharged home with no planned clinical services, as opposed to being discharged to home health care, hospice, or skilled care. Hospitals 35 are teaching hospitals

Male, n (%)34 (34.0)52 (32.5)0.803
Race, n (%)   
White94 (94.0)151 (94.4)0.691
Black2 (2.0)5 (3.1)
Other4 (4.0)4 (2.5)
Age, n (%), y   
<6517 (17.0)23 (14.4)0.028a
657414 (14.0)42 (26.3)
758429 (29.0)54 (33.8)
8540 (40.0)41 (25.6)
Dual eligible, n (%)b11 (11.0)24 (15.0)0.358
Caregiver present, n (%)17 (17.0)34 (21.3)0.401
Length of stay, mean (SD), d6.1 (4.1)7.9 (4.8)0.008a
Index diagnosis, n (%)   
Acute MI3 (3.0)6 (3.8)0.806
CHF6 (6.0)20 (12.5)0.111
Pneumonia7 (7.0)9 (5.6)0.572
COPD6 (6.0)6 (8.8)0.484
Discharged home without clinical services, n (%)c23 (23.0)56 (35.0)0.041a
Hospital site   
Hospital 115 (15.0)43 (26.9)<0.001a
Hospital 220 (20.0)26 (16.3)
Hospital 315 (15.0)23 (14.4)
Hospital 42 (2.0)48 (30.0)
Hospital 548 (48.0)20 (12.5)

Patients who identified themselves as being unable to control important things in their lives were 65% less likely to consent to the behavioral intervention than those in control (odds ratio [OR]: 0.35, 95% confidence interval [CI]: 0.14‐0.92), and those who did not feel confident about recovering were 83% less likely to consent (OR: 0.17, 95% CI: 0.06‐0.45). Individuals who were confused by any question were 89% less likely to consent (OR: 0.11, 95% CI: 0.05‐0.24). Individuals who answered the medication question incorrectly were 3 times more likely to consent (OR: 3.82, 95% CI: 1.12‐13.03). There were no significant differences in consent for feeling overwhelmed (difficulties piling up) or for having discussed advance care planning with family members or doctors.

We had insufficient power to detect significant differences in posthospital utilization (including hospital readmission, emergency‐department use, and receipt of home health), based on screening‐question responses (data not shown).

DISCUSSION

We find that patients who declined to participate in the behavioral intervention (eligible nonparticipants) differed from participants in 3 important ways: perceived stress, recovery expectation, and health literacy. As hypothesized, patients with higher perceived stress and lower recovery expectation were less likely to consent to the behavioral intervention, even after adjusting for demographic and healthcare‐utilization differences. Contrary to our hypothesis, patients who incorrectly answered the medication question were more likely to consent to the intervention than those who correctly answered.

Characterizing nonparticipants and participants can offer important insight into the limitations of the research that informs clinical guidelines and behavioral interventions. Such characteristics could also indicate how to better engage patients in interventions or other aspects of their care, if associated with lower rates of adherence to recommended health behaviors or treatment plans. For example, self‐efficacy (closely related to perceived stress) and hopelessness regarding clinical outcomes (similar to low recovery expectation in the present study) are associated with nonadherence to medication plans and other care in some populations.[5, 6] Other more extreme stress, like that following a major medical event, has also been associated with a lower rate of adherence to medication regimens and a resulting higher rate of hospital readmission and mortality.[19, 20] People with low health literacy (compared with adequate health literacy) are more likely to report being confused about their medications, requesting help to read medication labels and missing appointments due to trouble reading reminder cards.[9] Identifying these characteristics may assist providers in helping patients address adherence barriers by first accurately identifying the root of patient issues (eg, where the lack of confidence in recovery is rooted in lack of resources or social support), then potentially referring to community resources where possible. For example, some states (including Rhode Island, this study's location) may have Aging and Disability Resource Centers dedicated to linking elderly people with transportation, decision support, and other resources to support quality care.

The association between health literacy and intervention participation remains uncertain. Our question, which assessed interpretation of a prescription label as a health‐literacy proxy, may have given patients insight into their limited health literacy that motivated them to accept the subsequent behavioral intervention. Others have found that lowerhealth literacy patients want their providers to know that they did not understand some health words,[9] though they may be less likely to ask questions, request additional services, or seek new information during a medical encounter.[21] In our study, those who correctly answered the medication‐label question were almost mutually exclusive from those who were otherwise stressed (12% overlap; data not shown). Thus, patients who correctly answer this question may correctly realize that they do not need the support offered by the behavioral intervention and decline to participate. For other patients, perceived stress and poor recovery expectations may be more immediate and important determinants of declination, with patients too stressed to volunteer for another task, even if it involves much‐needed assistance.

The frequency with which patients were confused by the questions merits further comment and may also be driven by stress. Though each question seeks to identify the impact of a specific construct (Table 1), being confused by any question may reflect a more general (or subacute) level of cognitive impairment or generalized low health literacy not limited to the applied numeracy of the medication‐label question. We excluded confused responses to demonstrate more clearly the impact of each individual construct.

The impact of these characteristics may be affected by study design or other characteristics. One of the few studies to examine (via RCT) how methods affect consent found that participation decreased with increasing complexity of the consent process: written consent yielded the lowest participation, limited written consent was higher, and verbal consent was the highest.[10] Other tactics to increase consent include monetary incentives,[22] culturally sensitive materials,[7] telephone reminders,[23] an opt‐out instead of opt‐in approach,[23] and an open design where participants know which treatment they are receiving.[23] We do not know how these tactics relate to the characteristics captured in our screening questions, although other characteristics we measured, such as patients' self‐identified race, have been associated with intervention participation and access to care,[8, 24, 25] and patients who perceive that the benefit of the intervention outweighs expected risks and time requirements are more likely to consent.[4] We intentionally minimized the number of screening questions to encourage participation. The high rate of consent to our screening questions compared with consent to the (more involved) behavioral intervention reveals how sensitive patients are to the perceived invasiveness of an intervention.

We note several limitations. First, overall generalizability is limited due to our small sample size, use of consecutive convenience sampling, and exclusion criteria (eg, patients discharged to long‐term or skilled nursing care). And, these results may not apply to patients who are not hospitalized; hospitalized patients may have different motivations and stressors regarding their involvement in their care. Additionally, although we included as many people with mild cognitive impairment as possible by proxy through caregivers, we excluded some that did not have caregivers, potentially undermining the accuracy of how cognition impacts the choice to accept the behavioral intervention. Because researchers often explicitly exclude individuals based on cognitive impairment, differences between recruited subjects and the population at large may be particularly high among elderly patients, where up to half of the eligible population may be affected by cognitive impairment.[26] Further research into successfully engaging caregivers as a way to reach otherwise‐excluded patients with cognitive impairment can help to mitigate threats to generalizability. Finally, our screening questions are based on validated questions, but we rearranged our question wording, simplified answer choices, and removed them from their original context. Thus, the questions were not validated in our population or when administered in this manner. Although we conducted cognitive testing, further validity and reliability testing are necessary to translate these questions into a general screening tool. The medication‐label question also requires revision; in data collection and analysis, we assume that patients who were unable to answer (needs glasses, too tired, etc.) were masking an inability to respond correctly. Though the use of this excuse is cited in the literature,[17] we cannot be certain that our treatment of it in these screening questions is generalizable. Generalizability also applies to how we group responses. Isolating the most negative response (by grouping the middle answer with the most positive answer) most specifically identifies individuals more likely to need assistance and is therefore clinically pertinent, but this also potentially fails to identify individuals who also need help but do not choose the more extreme answer. Further research to refine the screening questions might also consider the timeframe of the perceived stress questions (past week rather than past month); this timeframe may be specific to the acute medical situation rather than general or unrelated perceived stress. Though this study cannot test this hypothesis, individuals with higher pre‐illness perceived stress may be more interested in addressing the issues that were stressors prior to acute illness, rather than the offered behavioral intervention. Additionally, some of the questions were highly correlated (Q1 and Q2) and indicate a potential for shortening the screening questionnaire.

Still, these findings advance the discussion of how to identify and consent hospitalized patients for participation in behavioral interventions, both for research and for routine clinical care. Researchers should specifically consider how to engage individuals who are stressed and lack confidence about recovery, to improve both reach and effectiveness. For example, interventions could prospectively collect data on stress and confidence in recovery and include protocols to support people identified with these characteristics. These characteristics may also offer insight into improving patient and caregiver engagement; more research is needed into characteristics related to patients' willingness to seek assistance in their care. We are not the first to suggest that characteristics not recorded in medical charts may affect patients' completion of, or response to, behavioral interventions,[27, 28] and considering differences between participants and eligible nonparticipants in clinical care delivery and interventions can strengthen the evidence base for clinical improvements, particularly those related to patient self‐management. These implications are useful both for practicing clinicians and for larger systems examining the comparative effectiveness of patient interventions and generalizing results from RCTs.

Acknowledgments

The authors thank Phil Clark, PhD, and the SENIOR Project (Study of Exercise and Nutrition in Older Rhode Islanders) research team at the University of Rhode Island for formulating 1 of the screening questions, and Marissa Meucci for her assistance with the cognitive testing and formative research for the screening questions.

Disclosures

The analyses on which this study is based were performed by Healthcentric Advisors under contract HHSM 5002011‐RI10C, titled Utilization and Quality Control Peer Review for the State of Rhode Island, sponsored by the Centers for Medicare and Medicaid Services, US Department of Health and Human Services. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US government. The authors report no conflicts of interest.

References
  1. Boer SPM, Lenzen MJ, Oemrawsingh RM, et al. Evaluating the 'all‐comers' design: a comparison of participants in two 'all‐comers' PCI trials with non‐participants. Eur Heart J. 2011;32(17):2161–2167.
  2. Stirman SW, DeRubeis RJ, Crits‐Christoph P, Rothman A. Can the randomized controlled trial literature generalize to nonrandomized patients? J Consult Clin Psychol. 2005;73(1):127–135.
  3. Voss R, Baier RR, Gardner R, Butterfield K, Lehrman S, Gravenstein S. The care transitions intervention: translating from efficacy to effectiveness. Arch Intern Med. 2011;171(14):1232–1237.
  4. Verheggen FW, Nieman F, Jonkers R. Determinants of patient participation in clinical studies requiring informed consent: why patients enter a clinical trial. Patient Educ Couns. 1998;35(2):111–125.
  5. Kamau TM, Olson VG, Zipp GP, Clark M. Coping self‐efficacy as a predictor of adherence to antiretroviral therapy in men and women living with HIV in Kenya. AIDS Patient Care STDS. 2011;25(9):557–561.
  6. Joyner‐Grantham J, Mount DL, McCorkle OD, Simmons DR, Ferrario CM, Cline DM. Self‐reported influences of hopelessness, health literacy, lifestyle action, and patient inertia on blood pressure control in a hypertensive emergency department population. Am J Med Sci. 2009;338(5):368–372.
  7. Watson JM, Torgerson DJ. Increasing recruitment to randomised trials: a review of randomised controlled trials. BMC Med Res Methodol. 2006;6:34.
  8. Cooke CR, Erickson SE, Watkins TR, Matthay MA, Hudson LD, Rubenfeld GD. Age‐, sex‐, and race‐based differences among patients enrolled versus not enrolled in acute lung injury clinical trials. Crit Care Med. 2010;38(6):1450–1457.
  9. Wolf MS, Williams MV, Parker RM, Parikh NS, Nowlan AW, Baker DW. Patients' shame and attitudes toward discussing the results of literacy screening. J Health Commun. 2007;12(8):721–732.
  10. Marco CA. Impact of detailed informed consent on research subjects' participation: a prospective, randomized trial. J Emerg Med. 2008;34(3):269–275.
  11. Cohen S, Kamarck T, Mermelstein R. A global measure of perceived stress. J Health Soc Behav. 1983;24:385–396. Available at: http://www.psy.cmu.edu/∼scohen/globalmeas83.pdf. Accessed May 10, 2012.
  12. Pfizer, Inc. Clear Health Communication: The Newest Vital Sign. Available at: http://www.pfizerhealthliteracy.com/asset/pdf/NVS_Eng/files/nvs_flipbook_english_final.pdf. Accessed May 10, 2012.
  13. White S, Chen J, Atchison R. Relationship of preventive health practices and health literacy: a national study. Am J Health Behav. 2008;32(3):227–242.
  14. Kripalani S, Henderson LE, Chiu EY, Robertson R, Kolm P, Jacobson TA. Predictors of medication self‐management skill in a low‐literacy population. J Gen Intern Med. 2006;21:852–856.
  15. Mondloch MV, Cole DC, Frank JW. Does how you do depend on how you think you'll do? A systematic review of the evidence for a relation between patients' recovery expectations and health outcomes [published correction appears in CMAJ. 2001;165(10):1303]. CMAJ. 2001;165(2):174–179.
  16. Zhang B, Wright AA, Huskamp HA, et al. Health care costs in the last week of life: associations with end‐of‐life conversations. Arch Intern Med. 2009;169(5):480–488.
  17. Emmerton LM, Mampallil L, Kairuz T, McKauge LM, Bush RA. Exploring health literacy competencies in community pharmacy. Health Expect. 2010;15(1):12–22.
  18. Coleman EA, Parry C, Chalmers S, Min SJ. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822–1828.
  19. Shemesh E, Rudnick A, Kaluski E, et al. A prospective study of posttraumatic stress symptoms and non‐adherence in survivors of a myocardial infarction (MI). Gen Hosp Psychiatry. 2001;23:215–222.
  20. Shemesh E, Yehuda R, Milo O, et al. Posttraumatic stress, non‐adherence, and adverse outcomes in survivors of a myocardial infarction. Psychosom Med. 2004;66:521–526.
  21. Hironaka LK, Paasche‐Orlow MK. The implications of health literacy on patient‐provider communication. Arch Dis Child. 2008;93:428–432.
  22. Mapstone J, Elbourne D, Roberts IG. Strategies to improve recruitment to research studies. Cochrane Database Syst Rev. 2007;2:MR000013.
  23. Treweek S, Pitkethly M, Cook J, et al. Strategies to improve recruitment to randomised controlled trials. Cochrane Database Syst Rev. 2010;4:MR000013.
  24. Wilkes AE, Bordenave K, Vinci L, Peek ME. Addressing diabetes racial and ethnic disparities: lessons learned from quality improvement collaboratives. Diabetes Manag (Lond). 2011;1(6):653–660.
  25. King AC, Cao D, Southard CC, Matthews A. Racial differences in eligibility and enrollment in a smoking cessation clinical trial. Health Psychol. 2011;30(1):40–48.
  26. Taylor JS, DeMers SM, Vig EK, Borson MD. The disappearing subject: exclusion of people with cognitive impairment from research. J Am Geriatr Soc. 2012;60:413–419.
  27. Brazier JE, Dixon S, Ratcliffe J. The role of patient preferences in cost‐effectiveness analysis: a conflict of values? Pharmacoeconomics. 2009;27(9):705–712.
  28. Fan VS, Gaziano JM, Lew R, et al. A comprehensive care management program to prevent chronic obstructive pulmonary disease hospitalizations. Ann Intern Med. 2012;156(10):673–683.
  29. Flood AB, Lorence DP, Ding J, McPherson K, Black NA. The role of expectations in patients' reports of post‐operative outcomes and improvement following therapy. Med Care. 1993;31:1043–1056.
  30. Borkan JM, Quirk M. Expectations and outcomes after hip fracture among the elderly. Int J Aging Hum Dev. 1992;34:339–350.
  31. Petrie KJ, Weinman J, Sharpe N, Buckley J. Role of patients' view of their illness in predicting return to work and functioning after myocardial infarction: longitudinal study. BMJ. 1996;312:1191–1194.
Issue
Journal of Hospital Medicine - 8(4)
Page Number
208-214
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Stefan Gravenstein, MD, MPH, Case Western Reserve University, 11100 Euclid Avenue, Mailstop HAN6095, Cleveland, OH 44106; Telephone: 401–528‐3200; Fax: 401–528‐3210; E‐mail: [email protected]

Handoff CEX

Article Type
Changed
Mon, 05/22/2017 - 18:12
Display Headline
Development of a handoff evaluation tool for shift‐to‐shift physician handoffs: The handoff CEX

Transfers of care among trainee physicians within the hospital typically occur at least twice a day and have become more frequent as work hours have declined.[1] The 2011 Accreditation Council for Graduate Medical Education (ACGME) guidelines,[2] which restrict intern shifts to 16 hours from a previous maximum of 30, have likely increased the frequency of trainee handoffs even further. Similarly, transfers among hospitalist attendings occur at least twice a day, given typical shifts of 8 to 12 hours.

Given the frequency of transfers, and the potential for harm generated by failed transitions,[3, 4, 5, 6] end‐of‐shift written and verbal handoffs have assumed increasing importance in hospital care, among both trainees and hospitalist attendings.

The ACGME now requires that programs assess the competency of trainees in handoff communication.[2] Yet there are few tools for assessing the quality of sign‐out communication; those that exist focus primarily on the written sign‐out and are rarely validated.[7, 8, 9, 10, 11, 12] Furthermore, it is uncertain whether such assessments must be done by supervisors or whether peers can participate in the evaluation. In this prospective, multi‐institutional study, we assess the performance characteristics of a verbal sign‐out evaluation tool for internal medicine housestaff and hospitalist attendings, and examine whether it can be used by peers as well as by external evaluators. This tool has previously been shown to discriminate effectively between experienced and inexperienced nurses conducting nursing handoffs.[13]

METHODS

Tool Design and Measures

The Handoff CEX (clinical evaluation exercise) is a structured assessment based on the format of the mini‐CEX, an instrument used to assess the quality of history and physical examination by trainees for which validation studies have previously been conducted.[14, 15, 16, 17] We developed the tool based on themes we identified from our own expertise,[1, 5, 6, 8, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] the ACGME core competencies for trainees,[2] and the literature to maximize content validity. First, standardization has numerous demonstrable benefits for safety in general and handoffs in particular.[30, 31, 32] Consequently we created a domain for organization in which standardization was a characteristic of high performance.

Second, there is evidence that people engaged in conversation routinely overestimate peer comprehension,[27] and that explicit strategies to combat this overestimation, such as confirming understanding, explicitly assigning tasks rather than using open‐ended language, and using concrete language, are effective.[33] Accordingly we created a domain for communication skills, which is also an ACGME competency.

Third, although there were no formal guidelines for sign‐out content when we developed this tool, our own research had demonstrated that the content elements most often missing, and felt by stakeholders to be important, related to clinical condition and to making thinking processes explicit,[5, 6] so we created a content domain that highlights these areas and meets the ACGME competency of medical knowledge. In accordance with standards for evaluation of learners, we incorporated a judgment domain to identify where trainees fall on the RIME spectrum of reporter, interpreter, manager, and educator.

Next, we added a section for professionalism in accordance with the ACGME core competencies of professionalism and patient care.[34] To avoid the disinclination of peers to label each other unprofessional, we labeled the professionalism domain as patient‐focused on the tool.

Finally, we included a domain for setting because an extensive literature demonstrates increased handoff failures in noisy or interruptive settings.[35, 36, 37] We then revised the tool slightly based on our experiences among nurses and students.[13, 38] The final tool included the 6 domains described above and an assessment of overall competency. Each domain was scored on a 9‐point scale with descriptive anchors at the high and low ends of performance. We further divided the scale into 3 main sections: unsatisfactory (score 1–3), satisfactory (4–6), and superior (7–9). We designed 2 tools, 1 to assess the person providing the handoff and 1 to assess the handoff recipient, each with its own descriptive anchors. The recipient tool did not include a content domain (see Supporting Information, Appendix 1, in the online version of this article).
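To make the rating scheme concrete, here is a minimal sketch in Python; the class, function, and domain-key names are our own representation, not part of the instrument itself.

```python
from dataclasses import dataclass

PROVIDER_DOMAINS = ["setting", "organization", "communication",
                    "content", "judgment", "professionalism", "overall"]
# The recipient tool omits the content domain.
RECIPIENT_DOMAINS = [d for d in PROVIDER_DOMAINS if d != "content"]

def band(score: int) -> str:
    """Map a 9-point domain score to its descriptive band."""
    if not 1 <= score <= 9:
        raise ValueError("scores run from 1 to 9")
    if score <= 3:
        return "unsatisfactory"
    if score <= 6:
        return "satisfactory"
    return "superior"

@dataclass
class HandoffCexRating:
    role: str                # "provider" or "recipient"
    scores: dict[str, int]   # domain -> score on the 9-point scale

    def bands(self) -> dict[str, str]:
        return {d: band(s) for d, s in self.scores.items()}

# Example: a provider rated 7 in every domain falls in the superior band.
rating = HandoffCexRating(role="provider",
                          scores={d: 7 for d in PROVIDER_DOMAINS})
assert rating.bands()["overall"] == "superior"
```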

Setting and Subjects

We tested the tool in 2 different urban academic medical centers: the University of Chicago Medicine (UCM) and Yale‐New Haven Hospital (Yale). At UCM, we tested the tool among hospitalists, nurse practitioners, and physician assistants during the Monday and Tuesday morning and Friday evening sign‐out sessions. At Yale, we tested the tool among housestaff during the evening sign‐out session from the primary team to the on‐call covering team.

The UCM is a 550‐bed urban academic medical center in which the nonteaching hospitalist service cares for patients with liver disease or with end‐stage renal or lung disease awaiting transplant, as well as a small fraction of general medicine and oncology patients when the housestaff service exceeds its cap. No formal sign‐out training is provided to attending or midlevel providers. The nonteaching hospitalist service operates separately from the housestaff service and consists of 38 hospitalist clinicians (hospitalist attendings, nurse practitioners, and physician assistants). There are 2 handoffs each day. In the morning, the departing night hospitalist hands off to the incoming daytime hospitalist or midlevel provider; these handoffs occur at 7:30 am in a dedicated room. In the evening, the daytime hospitalist or midlevel provider hands off to an incoming night hospitalist; this handoff occurs at 5:30 pm or 7:30 pm in a dedicated location. The written sign‐out is maintained as a Microsoft Word (Microsoft Corp., Redmond, WA) document on a password‐protected server and updated daily.

Yale is a 946‐bed urban academic medical center with a large internal medicine training program. Formal sign‐out education that covers the main domains of the tool is provided to new interns during the first 3 months of the year,[19] and a templated electronic medical record‐based electronic written handoff report is produced by the housestaff for all patients.[22] Approximately half of inpatient medicine patients are cared for by housestaff teams, which are entirely separate from the hospitalist service. Housestaff sign‐out occurs between 4 pm and 7 pm every night. At a minimum, the departing intern signs out to the incoming intern; this handoff is typically supervised by at least 1 second‐ or third‐year resident. All patients are signed out verbally; in addition, the written handoff report is provided to the incoming team. Most handoffs occur in a quiet charting room.

Data Collection

Data collection at UCM occurred between March and December 2010 on 3 days of each week: Mondays, Tuesdays, and Fridays. On Mondays and Tuesdays the morning handoffs were observed; on Fridays the evening handoffs were observed. Data collection at Yale occurred between March and May 2011. Only evening handoffs from the primary team to the overnight coverage were observed. At both sites, participants provided verbal informed consent prior to data collection. At the time of an eligible sign‐out session, a research assistant (D.R. at Yale, P.S. at UCM) provided the evaluation tools to all members of the incoming and outgoing teams, and observed the sign‐out session himself. Each person providing a handoff was asked to evaluate the recipient of the handoff; each person receiving a handoff was asked to evaluate the provider of the handoff. In addition, the trained third‐party observer (D.R., P.S.) evaluated both the provider and recipient of the handoff. The external evaluators were trained in principles of effective communication and the use of the tool, with specific review of anchors at each end of each domain. One evaluator had a DO degree and was completing an MPH degree. The second evaluator was an experienced clinical research assistant whose training consisted of supervised observation of 10 handoffs by a physician investigator. At Yale, if a resident was present, she or he was also asked to evaluate both the provider and recipient of the handoff. Consequently, every sign‐out session included at least 2 evaluations of each participant, 1 by a peer evaluator and 1 by a consistent external evaluator who did not know the patients. At Yale, many sign‐outs also included a third evaluation by a resident supervisor.

The study was approved by the institutional review boards at both UCM and Yale.

Statistical Analysis

We obtained the mean, median, and interquartile range of scores for each subdomain of the tool as well as for the overall assessment of handoff quality. We assessed convergent construct validity by examining performance of the tool in different contexts: we determined whether scores differed by type of participant (provider or recipient), by site, by training level of the person evaluated, or by type of evaluator (external, resident supervisor, or peer), using Wilcoxon rank sum tests and Kruskal‐Wallis tests. For the assessment of differences in ratings by training level, we used evaluations of sign‐out providers only, because the 2 sites differed in scores for recipients. We also assessed construct validity by using Spearman rank correlation coefficients to describe the internal consistency of the tool in terms of the correlation between its domains, and we conducted an exploratory factor analysis to gain insight into whether the subdomains of the tool were measuring the same construct. In conducting this analysis, we restricted the dataset to evaluations of sign‐out providers only, and used a principal components estimation method, a promax rotation, and squared multiple correlation communality priors. Finally, we conducted preliminary studies of reliability by testing whether different types of evaluators provided similar assessments: we calculated a weighted kappa using Fleiss‐Cohen weights for external versus peer scores, and again for supervising resident versus peer scores (Yale only). We were not able to assess test‐retest reliability, given the nature of the sign‐out process. Statistical significance was defined by a P value of 0.05 or less, and analyses were performed using SAS 9.2 (SAS Institute, Cary, NC).
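Although all analyses were performed in SAS 9.2, the tests described above correspond to standard routines in other environments. The following is a minimal Python sketch under that assumption, with hypothetical score vectors; note that quadratic weights in scikit-learn's cohen_kappa_score are generally identified with Fleiss-Cohen weighting.

```python
import numpy as np
from scipy.stats import ranksums, kruskal, spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical arrays of 9-point domain scores (not study data).
provider_scores = np.array([7, 8, 6, 9, 7, 5, 8])
recipient_scores = np.array([8, 8, 7, 9, 8, 6, 8])

# Wilcoxon rank sum test: do provider and recipient scores differ?
stat, p = ranksums(provider_scores, recipient_scores)

# Kruskal-Wallis test across more than 2 groups (e.g., training levels).
h, p_kw = kruskal([7, 8, 6], [7, 7, 9, 8], [8, 8, 7])

# Spearman rank correlation between scores in two domains.
rho, p_rho = spearmanr([7, 8, 6, 9], [6, 8, 7, 9])

# Weighted kappa with quadratic (Fleiss-Cohen) weights for agreement
# between external and peer evaluators rating the same handoffs.
external = [7, 6, 7, 8, 5]
peer = [8, 7, 8, 8, 7]
kappa = cohen_kappa_score(external, peer, weights="quadratic")
```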

RESULTS

A total of 149 handoff sessions were observed: 89 at UCM and 60 at Yale. Each site conducted a similar total number of evaluations: 336 at UCM, 337 at Yale. These sessions involved 97 unique individuals, 34 at UCM and 63 at Yale. Overall scores were high at both sites, but a wide range of scores was applied (Table 1).

Median, Mean, and Range of Handoff CEX Scores in Each Domain, Providers (N=343) and Recipients (N=330)

Domain | Provider Median (IQR) | Provider Mean (SD) | Provider Range | Recipient Median (IQR) | Recipient Mean (SD) | Recipient Range | P Value
Setting | 7 (6–9) | 7.0 (1.7) | 2–9 | 7 (6–9) | 7.3 (1.6) | 2–9 | 0.05
Organization | 7 (6–8) | 7.2 (1.5) | 2–9 | 8 (6–9) | 7.4 (1.4) | 2–9 | 0.07
Communication | 7 (6–9) | 7.2 (1.6) | 1–9 | 8 (7–9) | 7.4 (1.5) | 2–9 | 0.22
Content | 7 (6–8) | 7.0 (1.6) | 2–9 | N/A | N/A | N/A | N/A
Judgment | 8 (6–8) | 7.3 (1.4) | 3–9 | 8 (7–9) | 7.5 (1.4) | 3–9 | 0.06
Professionalism | 8 (7–9) | 7.4 (1.5) | 2–9 | 8 (7–9) | 7.6 (1.4) | 3–9 | 0.23
Overall | 7 (6–8) | 7.1 (1.5) | 2–9 | 7 (6–8) | 7.4 (1.4) | 2–9 | 0.02

NOTE: Abbreviations: IQR, interquartile range; N/A, not applicable (the recipient tool did not include a content domain); SD, standard deviation.

Handoff Providers

A total of 343 evaluations of handoff providers were completed regarding 67 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff provider evaluation tool was professionalism (median: 8; interquartile range [IQR]: 7–9). The lowest rated domain was content (median: 7; IQR: 6–8) (Table 1).

Handoff Recipients

A total of 330 evaluations of handoff recipients were completed regarding 58 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff recipient evaluation tool was professionalism, with a median of 8 (IQR: 7–9). The lowest rated domain was setting, with a median score of 7 (IQR: 6–9) (Table 1).

Validity Testing

Comparing provider scores to recipient scores, recipients received significantly higher scores for overall assessment (Table 1). Scores at UCM and Yale were similar in all domains for providers but were slightly lower at UCM in several domains for recipients (see Supporting Information, Appendix 2, in the online version of this article). Scores did not differ significantly by training level (Table 2). Third‐party external evaluators consistently gave lower marks for the same handoff than peer evaluators did (Table 3).

Handoff CEX Scores by Training Level, Providers Only: Median (Range)

Domain | NP/PA (N=33) | Subintern or Intern (N=170) | Resident (N=44) | Hospitalist (N=95) | P Value
Setting | 7 (2–9) | 7 (3–9) | 7 (4–9) | 7 (2–9) | 0.89
Organization | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (3–9) | 0.11
Communication | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (1–9) | 0.72
Content | 7 (3–9) | 7 (2–9) | 7 (4–9) | 7 (2–9) | 0.92
Judgment | 8 (5–9) | 7 (3–9) | 8 (4–9) | 8 (4–9) | 0.09
Professionalism | 8 (4–9) | 7 (2–9) | 8 (3–9) | 8 (4–9) | 0.82
Overall | 7 (3–9) | 7 (2–9) | 8 (4–9) | 7 (2–9) | 0.28

NOTE: Abbreviations: NP/PA, nurse practitioner/physician assistant.
Handoff CEX Scores by Peer Versus External Evaluators: Median (Range)

Provider evaluations:
Domain | Peer (N=152) | Resident Supervisor (N=43) | External (N=147) | P Value
Setting | 8 (3–9) | 7 (3–9) | 7 (2–9) | 0.02
Organization | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.18
Communication | 8 (3–9) | 8 (3–9) | 7 (1–9) | <0.001
Content | 8 (3–9) | 8 (2–9) | 7 (2–9) | <0.001
Judgment | 8 (4–9) | 8 (3–9) | 7 (3–9) | <0.001
Professionalism | 8 (3–9) | 8 (5–9) | 7 (2–9) | 0.02
Overall | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.001

Recipient evaluations:
Domain | Peer (N=145) | Resident Supervisor (N=43) | External (N=142) | P Value
Setting | 8 (2–9) | 7 (3–9) | 7 (2–9) | <0.001
Organization | 8 (3–9) | 8 (6–9) | 7 (2–9) | <0.001
Communication | 8 (3–9) | 8 (4–9) | 7 (2–9) | <0.001
Content | N/A | N/A | N/A | N/A
Judgment | 8 (3–9) | 8 (4–9) | 7 (3–9) | <0.001
Professionalism | 8 (3–9) | 8 (6–9) | 7 (3–9) | <0.001
Overall | 8 (2–9) | 8 (4–9) | 7 (2–9) | <0.001

NOTE: Abbreviations: N/A, not applicable.

Spearman rank correlation coefficients among the CEX subdomains for provider scores ranged from 0.71 to 0.86, except for setting (Table 4). Setting was less well correlated with the other subdomains, with correlation coefficients ranging from 0.39 to 0.41. Correlations between individual domains and the overall rating ranged from 0.80 to 0.86, except setting, which had a correlation of 0.55. Every correlation was significant at P<0.001. Correlation coefficients for recipient scores were very similar to those for provider scores (see Supporting Information, Appendix 3, in the online version of this article).

Spearman Correlation Coefficients, Provider Evaluations (N=342)

Domain | Setting | Organization | Communication | Content | Judgment | Professionalism
Setting | 1.00 | 0.40 | 0.40 | 0.39 | 0.39 | 0.41
Organization | 0.40 | 1.00 | 0.80 | 0.71 | 0.77 | 0.73
Communication | 0.40 | 0.80 | 1.00 | 0.79 | 0.82 | 0.77
Content | 0.39 | 0.71 | 0.79 | 1.00 | 0.80 | 0.74
Judgment | 0.39 | 0.77 | 0.82 | 0.80 | 1.00 | 0.78
Professionalism | 0.41 | 0.73 | 0.77 | 0.74 | 0.78 | 1.00
Overall | 0.55 | 0.80 | 0.84 | 0.83 | 0.86 | 0.82

NOTE: All P values <0.0001.
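As a computational illustration of this internal-consistency analysis, the sketch below computes pairwise Spearman rank correlations across subdomain scores. The data frame and its values are invented for demonstration; only the method (Spearman rank correlation, as reported in Table 4) comes from the study.

```python
# Minimal sketch: Spearman rank correlations between CEX subdomain scores.
# The six ratings per subdomain below are invented illustration data.
import pandas as pd
from scipy.stats import spearmanr

scores = pd.DataFrame({
    "setting":       [5, 8, 7, 6, 9, 4],
    "organization":  [7, 8, 6, 9, 7, 8],
    "communication": [7, 9, 6, 8, 7, 8],
})

# A single pairwise coefficient with its P value
rho, p = spearmanr(scores["organization"], scores["communication"])
print(f"organization vs communication: rho = {rho:.2f}, P = {p:.3f}")

# The full matrix, analogous to Table 4
print(scores.corr(method="spearman"))
```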

We analyzed 343 provider evaluations in the factor analysis; there were 6 missing values. The scree plot of eigenvalues did not support more than 1 factor; however, the rotated factor pattern for standardized regression coefficients for the first factor and the final communality estimates showed the setting component yielding smaller values than did other scale components (see Supporting Information, Appendix 4, in the online version of this article).
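The scree-plot step can be illustrated directly from the correlation matrix in Table 4. The sketch below is a simplification: the study used principal components estimation with a promax rotation in SAS, whereas this only extracts eigenvalues from the published matrix with numpy to show why a dominant first eigenvalue suggests a single factor.

```python
# Sketch: eigenvalues of the Table 4 provider correlation matrix. A first
# eigenvalue far larger than the rest is the scree-plot pattern that
# supports a one-factor solution.
import numpy as np

# Rows/columns: setting, organization, communication, content,
# judgment, professionalism (values copied from Table 4).
corr = np.array([
    [1.00, 0.40, 0.40, 0.39, 0.39, 0.41],
    [0.40, 1.00, 0.80, 0.71, 0.77, 0.73],
    [0.40, 0.80, 1.00, 0.79, 0.82, 0.77],
    [0.39, 0.71, 0.79, 1.00, 0.80, 0.74],
    [0.39, 0.77, 0.82, 0.80, 1.00, 0.78],
    [0.41, 0.73, 0.77, 0.74, 0.78, 1.00],
])

eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted largest to smallest
print(np.round(eigenvalues, 2))  # first value dominates; the rest are small
```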

Reliability Testing

Weighted kappa scores for provider evaluations ranged from 0.28 (95% confidence interval [CI]: 0.01, 0.56) for setting to 0.59 (95% CI: 0.38, 0.80) for organization, and were generally higher for resident versus peer comparisons than for external versus peer comparisons. Weighted kappa scores for recipient evaluations were slightly lower for external versus peer comparisons, and agreement was no better than chance for resident versus peer comparisons (Table 5).

Weighted Kappa Scores

Domain | Provider: External vs Peer, N=144 (95% CI) | Provider: Resident vs Peer, N=42 (95% CI) | Recipient: External vs Peer, N=134 (95% CI) | Recipient: Resident vs Peer, N=43 (95% CI)
Setting | 0.39 (0.24, 0.54) | 0.28 (0.01, 0.56) | 0.34 (0.20, 0.48) | 0.48 (0.27, 0.69)
Organization | 0.43 (0.29, 0.58) | 0.59 (0.39, 0.80) | 0.39 (0.22, 0.55) | 0.03 (-0.23, 0.29)
Communication | 0.34 (0.19, 0.49) | 0.52 (0.37, 0.68) | 0.36 (0.22, 0.51) | 0.02 (-0.18, 0.23)
Content | 0.38 (0.25, 0.51) | 0.53 (0.27, 0.80) | N/A | N/A
Judgment | 0.36 (0.22, 0.49) | 0.54 (0.25, 0.83) | 0.28 (0.15, 0.42) | -0.12 (-0.34, 0.09)
Professionalism | 0.47 (0.32, 0.63) | 0.47 (0.23, 0.72) | 0.35 (0.18, 0.51) | -0.01 (-0.29, 0.26)
Overall | 0.50 (0.36, 0.64) | 0.45 (0.24, 0.67) | 0.31 (0.16, 0.48) | 0.07 (-0.20, 0.34)

NOTE: Abbreviations: CI, confidence interval; N/A, not applicable.
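For illustration of the reliability computation, the sketch below derives a weighted kappa for paired ratings of the same handoffs by two rater types. The rating vectors are invented; scikit-learn's quadratic weighting corresponds to the Fleiss-Cohen weights named in the statistical methods.

```python
# Minimal sketch: quadratically weighted (Fleiss-Cohen) kappa between paired
# peer and external ratings of the same handoffs. Ratings are invented.
from sklearn.metrics import cohen_kappa_score

peer     = [8, 7, 9, 8, 6, 7, 8, 9, 7, 8]
external = [7, 6, 8, 7, 6, 6, 7, 8, 6, 7]

# weights="quadratic" penalizes disagreements by the squared score distance
kappa = cohen_kappa_score(peer, external, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```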

DISCUSSION

In this study we found that an evaluation tool for direct observation of housestaff and hospitalists generated a range of scores, performed similarly across 2 different institutions and among both trainees and attendings, and had high internal consistency, all of which supports its validity. However, external evaluators gave consistently lower marks than peer evaluators at both sites, resulting in low reliability when comparing these 2 groups of raters.

It has traditionally been difficult to conduct direct evaluations of handoffs, because they may occur at haphazard times, in variable locations, and with little advance notice. For this reason, several attempts have been made to incorporate peers in evaluations of handoff practices.[5, 39, 40] Using peers to conduct evaluations also has the advantage that peers are more likely to be familiar with the patients being handed off and might recognize handoff flaws that external evaluators would miss. Nonetheless, peer evaluations have important liabilities. Peers may be unwilling or unable to provide honest critiques of their colleagues, given that they must work closely together for years. Trainee peers may also lack sufficient clinical expertise or experience to accurately assess competence. In our study, peers gave consistently higher marks to their colleagues than did external evaluators, suggesting that they may have found it difficult to criticize their colleagues. We conclude that peer evaluation alone is likely an insufficient means of evaluating handoff quality.

Supervising residents gave marks very similar to those of intern peers, suggesting that they, too, are unwilling to criticize or are insufficiently experienced to evaluate, or alternatively, that the peer evaluations were reasonable. We suspect the last is unlikely, given that external evaluator scores were consistently lower than peers'. If anything, one would expect the external evaluators to be biased toward higher scores, because they were not familiar with the patients and therefore could not detect inaccuracies or omissions in the sign-out.

The tool appeared to perform less well in most cases for recipients than for providers, with a narrower range of scores and low weighted kappa scores. Although recipients play a key role in ensuring a high-quality sign-out, by paying close attention, making the handoff a bidirectional conversation, asking appropriate questions, and reading back key information, it may be that evaluators were unable to place these activities within the same domains that were used for the provider evaluation. An altogether different recipient evaluation approach may be necessary.[41]

In general, scores were clustered at the top of the score range, as is typical for evaluations. One strategy to spread out scores would be to refine the tool by adding anchors for satisfactory performance, not just at the extremes. A second approach might be to reduce the grading scale to only 3 points (unsatisfactory, satisfactory, superior) to force more scores to the middle; however, this might limit the discrimination ability of the tool. A minimal sketch of such a collapse follows.
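The sketch below maps 9-point scores onto the 3 labeled bands already defined for the tool (unsatisfactory 1-3, satisfactory 4-6, superior 7-9); the sample scores are arbitrary.

```python
# Sketch: collapsing 9-point CEX scores into the tool's 3 labeled bands.
def collapse(score: int) -> str:
    if score <= 3:
        return "unsatisfactory"
    if score <= 6:
        return "satisfactory"
    return "superior"

print([collapse(s) for s in (2, 5, 7, 9)])
# ['unsatisfactory', 'satisfactory', 'superior', 'superior']
```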

We have previously studied the use of this tool among nurses.[13] In that study, we also found consistently higher scores by peers than by external evaluators. We did, however, find a positive effect of experience, in which more experienced nurses received higher scores on average. We did not observe a similar training effect in this study, and several explanations are possible. First, the types of handoffs assessed may have played a role. At UCM, some assessed handoffs were night staff to day staff, which might be of lower quality than day staff to night staff handoffs, whereas at Yale, all handoffs were from day to night teams; thus, average scores at UCM (primarily hospitalists) might have been lowered by the type of handoff provided. Second, given that hospitalist evaluations were conducted exclusively at UCM and housestaff evaluations exclusively at Yale, the lack of difference between hospitalists and housestaff may reflect differences in evaluation practice or handoff practice at the 2 sites rather than training level. Third, in our experience, attending physicians provide briefer, less comprehensive sign-outs than trainees, particularly when communicating with equally experienced attendings; these sign-outs may appropriately be scored lower on the tool. Fourth, the great majority of the hospitalists at UCM were within 5 years of residency and therefore not much more experienced than the trainees. Finally, it is possible that skills do not improve over time, given the widespread lack of observation and feedback on this important skill during the training years.

The high internal consistency of most of the subdomains and the loading of all subdomains except setting onto 1 factor are evidence of convergent construct validity, but also suggest that evaluators have difficulty distinguishing among components of sign‐out quality. Internal consistency may also reflect a halo effect, in which scores on different domains are all influenced by a common overall judgment.[42] We are currently testing a shorter version of the tool including domains only for content, professionalism, and setting in addition to overall score. The fact that setting did not correlate as well with the other domains suggests that sign‐out practitioners may not have or exercise control over their surroundings. Consequently, it may ultimately be reasonable to drop this domain from the tool, or alternatively, to refocus on the need to ensure a quiet setting during sign‐out skills training.

There are several limitations to this study. External evaluations were conducted by personnel who were not familiar with the patients, and they may therefore have overestimated the quality of sign‐out. Studying different types of physicians at different sites might have limited our ability to identify differences by training level. As is commonly seen in evaluation studies, scores were skewed to the high end, although we did observe some use of the full range of the tool. Finally, we were limited in our ability to test inter‐rater reliability because of the multiple sources of variability in the data (numerous different raters, with different backgrounds at different settings, rating different individuals).

In summary, we developed a handoff evaluation tool that was easily completed by housestaff and attendings without training, that performed similarly in a variety of different settings at 2 institutions, and that can in principle be used either for peer evaluations or for external evaluations, although peer evaluations may be positively biased. Further work will be done to refine and simplify the tool.

ACKNOWLEDGMENTS

Disclosures: Development and evaluation of the sign‐out CEX was supported by a grant from the Agency for Healthcare Research and Quality (1R03HS018278‐01). Dr. Arora is supported by a National Institute on Aging award (K23 AG033763). Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30AG021342 NIH/NIA). No funding source had any role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality, the National Institute on Aging, the National Institutes of Health, or the American Federation for Aging Research. Dr. Horwitz had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. An earlier version of this work was presented as a poster at the Society of General Internal Medicine Annual Meeting in Orlando, Florida on May 9, 2012. Dr. Rand is now with the Department of Medicine, University of Vermont College of Medicine, Burlington, Vermont. Mr. Staisiunas is now with the Law School, Marquette University, Milwaukee, Wisconsin. The authors declare they have no conflicts of interest.

Appendix A

PROVIDER HAND-OFF CEX TOOL

[Tool not reproduced in this text version; see Supporting Information, Appendix 1, in the online version of this article.]

RECIPIENT HAND-OFF CEX TOOL

[Tool not reproduced in this text version; see Supporting Information, Appendix 1, in the online version of this article.]

Appendix B

Handoff CEX scores by site of evaluation

Domain | Provider: UCM, N=172 | Provider: Yale, N=170 | P Value | Recipient: UCM, N=163 | Recipient: Yale, N=167 | P Value
Setting | 7 (2-9) | 7 (3-9) | 0.32 | 7 (2-9) | 7 (3-9) | 0.36
Organization | 8 (2-9) | 7 (3-9) | 0.30 | 7 (2-9) | 8 (5-9) | 0.001
Communication | 7 (1-9) | 7 (3-9) | 0.67 | 7 (2-9) | 8 (4-9) | 0.03
Content | 7 (2-9) | 7 (2-9) |  | N/A | N/A | N/A
Judgment | 8 (3-9) | 7 (3-9) | 0.60 | 7 (3-9) | 8 (4-9) | 0.001
Professionalism | 8 (2-9) | 8 (3-9) | 0.67 | 8 (3-9) | 8 (4-9) | 0.35
Overall | 7 (2-9) | 7 (3-9) | 0.41 | 7 (2-9) | 8 (4-9) | 0.005

NOTE: Values are median (range). Abbreviations: N/A, not applicable.

 

Appendix C

Spearman correlation, recipients (N=330)

Domain | Setting | Organization | Communication | Judgment | Professionalism
Setting | 1.00 | 0.46 | 0.48 | 0.47 | 0.40
Organization | 0.46 | 1.00 | 0.78 | 0.75 | 0.75
Communication | 0.48 | 0.78 | 1.00 | 0.85 | 0.77
Judgment | 0.47 | 0.75 | 0.85 | 1.00 | 0.74
Professionalism | 0.40 | 0.75 | 0.77 | 0.74 | 1.00
Overall | 0.60 | 0.77 | 0.84 | 0.82 | 0.77

NOTE: All P values <0.0001.

Appendix D

Factor analysis results for provider evaluations

Rotated Factor Pattern (Standardized Regression Coefficients), N=336

Domain | Factor 1 | Factor 2
Organization | 0.64 | 0.27
Communication | 0.79 | 0.16
Content | 0.82 | 0.06
Judgment | 0.86 | 0.06
Professionalism | 0.66 | 0.23
Setting | 0.18 | 0.29

References
1. Horwitz LI, Krumholz HM, Green ML, Huot SJ. Transfers of patient care between house staff on internal medicine wards: a national survey. Arch Intern Med. 2006;166(11):1173-1177.
2. Accreditation Council for Graduate Medical Education. Common program requirements. 2011; http://www.acgme-2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed August 23, 2011.
3. Petersen LA, Brennan TA, O'Neil AC, Cook EF, Lee TH. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866-872.
4. Sutcliffe KM, Lewton E, Rosenthal MM. Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186-194.
5. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401-407.
6. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168(16):1755-1760.
7. Borowitz SM, Waggoner-Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign-out (in-hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17(1):6-10.
8. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. What are covering doctors told about their patients? Analysis of sign-out among internal medicine house staff. Qual Saf Health Care. 2009;18(4):248-255.
9. Gakhar B, Spencer AL. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign-out practices of internal medicine interns. Acad Med. 2010;85(7):1182-1188.
10. Raduma-Tomas MA, Flin R, Yule S, Williams D. Doctors' handovers in hospitals: a literature review. Qual Saf Health Care. 2011;20(2):128-133.
11. Bump GM, Jovin F, Destefano L, et al. Resident sign-out and patient hand-offs: opportunities for improvement. Teach Learn Med. 2011;23(2):105-111.
12. Helms AS, Perez TE, Baltz J, et al. Use of an appreciative inquiry approach to improve resident sign-out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287-291.
13. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365-2702.2012.04131.x.
14. Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini-CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med. 1995;123(10):795-799.
15. Norcini JJ, Blank LL, Arnold GK, Kimball HR. Examiner differences in the mini-CEX. Adv Health Sci Educ Theory Pract. 1997;2(1):27-33.
16. Durning SJ, Cation LJ, Markert RJ, Pangaro LN. Assessing the reliability and validity of the mini-clinical evaluation exercise for internal medicine residency training. Acad Med. 2002;77(9):900-904.
17. Holmboe ES, Huot S, Chung J, Norcini J, Hawkins RE. Construct validity of the miniclinical evaluation exercise (miniCEX). Acad Med. 2003;78(8):826-830.
18. Horwitz LI, Meredith T, Schuur JD, Shah NR, Kulkarni RG, Jenq GY. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701-710.e4.
19. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign-out skills curriculum. J Gen Intern Med. 2007;22(10):1470-1474.
20. Horwitz LI, Moin T, Wang L, Bradley EH. Mixed methods evaluation of oral sign-out practices. J Gen Intern Med. 2007;22(S1):S114.
21. Horwitz LI, Parwani V, Shah NR, et al. Evaluation of an asynchronous physician voicemail sign-out for emergency department admissions. Ann Emerg Med. 2009;54(3):368-378.
22. Horwitz LI, Schuster KM, Thung SF, et al. An institution-wide handoff task force to standardise and improve physician handoffs. BMJ Qual Saf. 2012;21(10):863-871.
23. Arora V, Johnson J. A model for building a standardized hand-off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646-655.
24. Arora V, Kao J, Lovinger D, Seiden SC, Meltzer D. Medication discrepancies in resident sign-outs and their potential to harm. J Gen Intern Med. 2007;22(12):1751-1755.
25. Arora VM, Johnson JK, Meltzer DO, Humphrey HJ. A theoretical framework and competency-based approach to improving handoffs. Qual Saf Health Care. 2008;17(1):11-14.
26. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433-440.
27. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125(3):491-496.
28. Johnson JK, Arora VM. Improving clinical handovers: creating local solutions for a global problem. Qual Saf Health Care. 2009;18(4):244-245.
29. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1(4):257-266.
30. Salerno SM, Arnett MV, Domanski JP. Standardized sign-out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121-126.
31. Haig KM, Sutton S, Whittington J. SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167-175.
32. Patterson ES. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17(1):4-5.
33. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125-132.
34. Ratanawongsa N, Bolen S, Howell EE, Kern DE, Sisson SD, Larriviere D. Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21(7):758-763.
35. Coiera E, Tombs V. Communication behaviours in a hospital setting: an observational study. BMJ. 1998;316(7132):673-676.
36. Coiera EW, Jayasuriya RA, Hardy J, Bannan A, Thorpe ME. Communication loads on clinical staff in the emergency department. Med J Aust. 2002;176(9):415-418.
37. Ong MS, Coiera E. A systematic review of failures in handoff communication during intrahospital transfers. Jt Comm J Qual Patient Saf. 2011;37(6):274-284.
38. Farnan JM, Paro JA, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2010;25(2):129-134.
39. Kitch BT, Cooper JB, Zapol WM, et al. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34(10):563-570.
40. Li P, Stelfox HT, Ghali WA. A prospective observational study of physician handoff for intensive-care-unit-to-ward patient transfers. Am J Med. 2011;124(9):860-867.
41. Greenstein E, Arora V, Banerjee S, Staisiunas P, Farnan J. Characterizing physician listening behavior during hospitalist handoffs using the HEAR checklist [published online ahead of print December 20, 2012]. BMJ Qual Saf. doi:10.1136/bmjqs-2012-001138.
42. Thorndike EL. A constant error in psychological ratings. J Appl Psychol. 1920;4(1):25.

A total of 149 handoff sessions were observed: 89 at UCM and 60 at Yale. Each site conducted a similar total number of evaluations: 336 at UCM, 337 at Yale. These sessions involved 97 unique individuals, 34 at UCM and 63 at Yale. Overall scores were high at both sites, but a wide range of scores was applied (Table 1).

Median, Mean, and Range of Handoff CEX Scores in Each Domain, Providers, and Recipients
DomainProvider, N=343Recipient, N=330P Value
Median (IQR)Mean (SD)RangeMedian (IQR)Mean (SD)Range
  • NOTE: Abbreviations: IQR, interquartile range; SD, standard deviation.

Setting7 (69)7.0 (1.7)297 (69)7.3 (1.6)290.05
Organization7 (68)7.2 (1.5)298 (69)7.4 (1.4)290.07
Communication7 (69)7.2 (1.6)198 (79)7.4 (1.5)290.22
Content7 (68)7.0 (1.6)29    
Judgment8 (68)7.3 (1.4)398 (79)7.5 (1.4)390.06
Professionalism8 (79)7.4 (1.5)298 (79)7.6 (1.4)390.23
Overall7 (68)7.1 (1.5)297 (68)7.4 (1.4)290.02

Handoff Providers

A total of 343 evaluations of handoff providers were completed regarding 67 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff provider evaluation tool was professionalism (median: 8; interquartile range [IQR]: 79). The lowest rated domain was content (median: 7; IQR: 68) (Table 1).

Handoff Recipients

A total of 330 evaluations of handoff recipients were completed regarding 58 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff provider evaluation tool was professionalism, with a median of 8 (IQR: 79). The lowest rated domain was setting, with a median score of 7 (IQR: 6‐9) (Table 1).

Validity Testing

Comparing provider scores to recipient scores, recipients received significantly higher scores for overall assessment (Table 1). Scores at UCM and Yale were similar in all domains for providers but were slightly lower at UCM in several domains for recipients (see Supporting Information, Appendix 2, in the online version of this article). Scores did not differ significantly by training level (Table 2). Third‐party external evaluators consistently gave lower marks for the same handoff than peer evaluators did (Table 3).

Handoff CEX Scores by Training Level, Providers Only
DomainMedian (Range)P Value
NP/PA, N=33Subintern or Intern, N=170Resident, N=44Hospitalist, N=95
  • NOTE: Abbreviations: NP/PA: nurse practitioner/physician assistant.

Setting7 (29)7 (39)7 (49)7 (29)0.89
Organization8 (49)7 (29)7 (49)8 (39)0.11
Communication8 (49)7 (29)7 (49)8 (19)0.72
Content7 (39)7 (29)7 (49)7 (29)0.92
Judgment8 (59)7 (39)8 (49)8 (49)0.09
Professionalism8 (49)7 (29)8 (39)8 (49)0.82
Overall7 (39)7 (29)8 (49)7 (29)0.28
Handoff CEX Scores by Peer Versus External Evaluators
 Provider, Median (Range)Recipient, Median (Range)
DomainPeer, N=152Resident, Supervisor, N=43External, N=147P ValuePeer, N=145Resident Supervisor, N=43External, N=142P Value
  • NOTE: Abbreviations: N/A, not applicable.

Setting8 (39)7 (39)7 (29)0.028 (29)7 (39)7 (29)<0.001
Organization8 (39)8 (39)7 (29)0.188 (39)8 (69)7 (29)<0.001
Communication8 (39)8 (39)7 (19)<0.0018 (39)8 (49)7 (29)<0.001
Content8 (39)8 (29)7 (29)<0.001N/AN/AN/AN/A
Judgment8 (49)8 (39)7 (39)<0.0018 (39)8 (49)7 (39)<0.001
Professionalism8 (39)8 (59)7 (29)0.028 (39)8 (69)7 (39)<0.001
Overall8 (39)8 (39)7 (29)0.0018 (29)8 (49)7 (29)<0.001

Spearman rank correlation coefficients among the CEX subdomains for provider scores ranged from 0.71 to 0.86, except for setting (Table 4). Setting was less well correlated with the other subdomains, with correlation coefficients ranging from 0.39 to 0.41. Correlations between individual domains and the overall rating ranged from 0.80 to 0.86, except setting, which had a correlation of 0.55. Every correlation was significant at P<0.001. Correlation coefficients for recipient scores were very similar to those for provider scores (see Supporting Information, Appendix 3, in the online version of this article).

Spearman Correlation Coefficients, Provider Evaluations (N=342)
 Spearman Correlation Coefficients
 SettingOrganizationCommunicationContentJudgmentProfessionalism
  • NOTE: All P values <0.0001.

Setting1.0000.400.400.390.390.41
Organization0.401.000.800.710.770.73
Communication0.400.801.000.790.820.77
Content0.390.710.791.000.800.74
Judgment0.390.770.820.801.000.78
Professionalism0.410.730.770.740.781.00
Overall0.550.800.840.830.860.82

We analyzed 343 provider evaluations in the factor analysis; there were 6 missing values. The scree plot of eigenvalues did not support more than 1 factor; however, the rotated factor pattern for standardized regression coefficients for the first factor and the final communality estimates showed the setting component yielding smaller values than did other scale components (see Supporting Information, Appendix 4, in the online version of this article).

Reliability Testing

Weighted kappa scores for provider evaluations ranged from 0.28 (95% confidence interval [CI]: 0.01, 0.56) for setting to 0.59 (95% CI: 0.38, 0.80) for organization, and were generally higher for resident versus peer comparisons than for external versus peer comparisons. Weighted kappa scores for recipient evaluation were slightly lower for external versus peer evaluations, but agreement was no better than chance for resident versus peer evaluations (Table 5).

Weighted Kappa Scores
DomainProviderRecipient
External vs Peer, N=144 (95% CI)Resident vs Peer, N=42 (95% CI)External vs Peer, N=134 (95% CI)Resident vs Peer, N=43 (95% CI)
  • NOTE: Abbreviations: CI, confidence interval; N/A, not applicable.

Setting0.39 (0.24, 0.54)0.28 (0.01, 0.56)0.34 (0.20, 0.48)0.48 (0.27, 0.69)
Organization0.43 (0.29, 0.58)0.59 (0.39, 0.80)0.39 (0.22, 0.55)0.03 (0.23, 0.29)
Communication0.34 (0.19, 0.49)0.52 (0.37, 0.68)0.36 (0.22, 0.51)0.02 (0.18, 0.23)
Content0.38 (0.25, 0.51)0.53 (0.27, 0.80)N/A (N/A)N/A (N/A)
Judgment0.36 (0.22, 0.49)0.54 (0.25, 0.83)0.28 (0.15, 0.42)0.12 (0.34, 0.09)
Professionalism0.47 (0.32, 0.63)0.47 (0.23, 0.72)0.35 (0.18, 0.51)0.01 (0.29, 0.26)
Overall0.50 (0.36, 0.64)0.45 (0.24, 0.67)0.31 (0.16, 0.48)0.07 (0.20, 0.34)

DISCUSSION

In this study we found that an evaluation tool for direct observation of housestaff and hospitalists generated a range of scores and was well validated in the sense of performing similarly across 2 different institutions and among both trainees and attendings, while having high internal consistency. However, external evaluators gave consistently lower marks than peer evaluators at both sites, resulting in low reliability when comparing these 2 groups of raters.

It has traditionally been difficult to conduct direct evaluations of handoffs, because they may occur at haphazard times, in variable locations, and without very much advance notice. For this reason, several attempts have been made to incorporate peers in evaluations of handoff practices.[5, 39, 40] Using peers to conduct evaluations also has the advantage that peers are more likely to be familiar with the patients being handed off and might recognize handoff flaws that external evaluators would miss. Nonetheless, peer evaluations have some important liabilities. Peers may be unwilling or unable to provide honest critiques of their colleagues given that they must work closely together for years. Trainee peers may also lack sufficient clinical expertise or experience to accurately assess competence. In our study, we found that peers gave consistently higher marks to their colleagues than did external evaluators, suggesting they may have found it difficult to criticize their colleagues. We conclude that peer evaluation alone is likely an insufficient means of evaluating handoff quality.

Supervising residents gave very similar marks as intern peers, suggesting that they also are unwilling to criticize, are insufficiently experienced to evaluate, or alternatively, that the peer evaluations were reasonable. We suspect the latter is unlikely given that external evaluator scores were consistently lower than peers. One would expect the external evaluators to be biased toward higher scores given that they are not familiar with the patients and are not able to comment on inaccuracies or omissions in the sign‐out.

The tool appeared to perform less well in most cases for recipients than for providers, with a narrower range of scores and low‐weighted kappa scores. Although recipients play a key role in ensuring a high‐quality sign‐out by paying close attention, ensuring it is a bidirectional conversation, asking appropriate questions, and reading back key information, it may be that evaluators were unable to place these activities within the same domains that were used for the provider evaluation. An altogether different recipient evaluation approach may be necessary.[41]

In general, scores were clustered at the top of the score range, as is typical for evaluations. One strategy to spread out scores further would be to refine the tool by adding anchors for satisfactory performance not just the extremes. A second approach might be to reduce the grading scale to only 3 points (unsatisfactory, satisfactory, superior) to force more scores to the middle. However, this approach might limit the discrimination ability of the tool.

We have previously studied the use of this tool among nurses. In that study, we also found consistently higher scores by peers than by external evaluators. We did, however, find a positive effect of experience, in which more experienced nurses received higher scores on average. We did not observe a similar training effect in this study. There are several possible explanations for the lack of a training effect. It is possible that the types of handoffs assessed played a role. At UCM, some assessed handoffs were night staff to day staff, which might be lower quality than day staff to night staff handoffs, whereas at Yale, all handoffs were day to night teams. Thus, average scores at UCM (primarily hospitalists) might have been lowered by the type of handoff provided. Given that hospitalist evaluations were conducted exclusively at UCM and housestaff evaluations exclusively at Yale, lack of difference between hospitalists and housestaff may also have been related to differences in evaluation practice or handoff practice at the 2 sites, not necessarily related to training level. Third, in our experience, attending physicians provide briefer less‐comprehensive sign‐outs than trainees, particularly when communicating with equally experienced attendings; these sign‐outs may appropriately be scored lower on the tool. Fourth, the great majority of the hospitalists at UCM were within 5 years of residency and therefore not very much more experienced than the trainees. Finally, it is possible that skills do not improve over time given widespread lack of observation and feedback during training years for this important skill.

The high internal consistency of most of the subdomains and the loading of all subdomains except setting onto 1 factor are evidence of convergent construct validity, but also suggest that evaluators have difficulty distinguishing among components of sign‐out quality. Internal consistency may also reflect a halo effect, in which scores on different domains are all influenced by a common overall judgment.[42] We are currently testing a shorter version of the tool including domains only for content, professionalism, and setting in addition to overall score. The fact that setting did not correlate as well with the other domains suggests that sign‐out practitioners may not have or exercise control over their surroundings. Consequently, it may ultimately be reasonable to drop this domain from the tool, or alternatively, to refocus on the need to ensure a quiet setting during sign‐out skills training.

There are several limitations to this study. External evaluations were conducted by personnel who were not familiar with the patients, and they may therefore have overestimated the quality of sign‐out. Studying different types of physicians at different sites might have limited our ability to identify differences by training level. As is commonly seen in evaluation studies, scores were skewed to the high end, although we did observe some use of the full range of the tool. Finally, we were limited in our ability to test inter‐rater reliability because of the multiple sources of variability in the data (numerous different raters, with different backgrounds at different settings, rating different individuals).

In summary, we developed a handoff evaluation tool that was easily completed by housestaff and attendings without training, that performed similarly in a variety of different settings at 2 institutions, and that can in principle be used either for peer evaluations or for external evaluations, although peer evaluations may be positively biased. Further work will be done to refine and simplify the tool.

ACKNOWLEDGMENTS

Disclosures: Development and evaluation of the sign‐out CEX was supported by a grant from the Agency for Healthcare Research and Quality (1R03HS018278‐01). Dr. Arora is supported by a National Institute on Aging (K23 AG033763). Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30AG021342 NIH/NIA). No funding source had any role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality, the National Institute on Aging, the National Institutes of Health, or the American Federation for Aging Research. Dr. Horwitz had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. An earlier version of this work was presented as a poster presentation at the Society of General Internal Medicine Annual Meeting in Orlando, Florida on May 9, 2012. Dr. Rand is now with the Department of Medicine, University of Vermont College of Medicine, Burlington, Vermont. Mr. Staisiunas is now with the Law School, Marquette University, Milwaukee, Wisconsin. The authors declare they have no conflicts of interest.

Appendix

A

PROVIDER HAND‐OFF CEX TOOL

 

 

RECIPIENT HAND‐OFF CEX TOOL

 

 

Appendix

B

 

Handoff CEX scores by site of evaluation

DomainProviderRecipient
Median (Range)P‐valueMedian (Range)P‐value
 UCYale UCYale 
N=172N=170 N=163N=167 
Setting7 (29)7 (39)0.327 (29)7 (39)0.36
Organization8 (29)7 (39)0.307 (29)8 (59)0.001
Communication7 (19)7 (39)0.677 (29)8 (49)0.03
Content7 (29)7 (29) N/AN/AN/A
Judgment8 (39)7 (39)0.607 (39)8 (49)0.001
Professionalism8 (29)8 (39)0.678 (39)8 (49)0.35
Overall7 (29)7 (39)0.417 (29)8 (49)0.005

 

Appendix

C

Spearman correlation, recipients (N=330)

SpearmanCorrelationCoefficients
 SettingOrganizationCommunicationJudgmentProfessionalism
Setting1.00.460.480.470.40
Organization0.461.000.780.750.75
Communication0.480.781.000.850.77
Judgment0.470.750.851.000.74
Professionalism0.400.750.770.741.00
Overall0.600.770.840.820.77

 

All p values <0.0001

 

Appendix

D

Factor analysis results for provider evaluations

Rotated Factor Pattern (Standardized Regression Coefficients) N=336
 Factor1Factor2
Organization0.640.27
Communication0.790.16
Content0.820.06
Judgment0.860.06
Professionalism0.660.23
Setting0.180.29

 

 

Transfers among trainee physicians within the hospital typically occur at least twice a day and have been increasing among trainees as work hours have declined.[1] The 2011 Accreditation Council for Graduate Medical Education (ACGME) guidelines,[2] which restrict intern working hours to 16 hours from a previous maximum of 30, have likely increased the frequency of physician trainee handoffs even further. Similarly, transfers among hospitalist attendings occur at least twice a day, given typical shifts of 8 to 12 hours.

Given the frequency of transfers, and the potential for harm generated by failed transitions,[3, 4, 5, 6] the end‐of‐shift written and verbal handoffs have assumed increasingly greater importance in hospital care among both trainees and hospitalist attendings.

The ACGME now requires that programs assess the competency of trainees in handoff communication.[2] Yet, there are few tools for assessing the quality of sign‐out communication. Those that exist primarily focus on the written sign‐out, and are rarely validated.[7, 8, 9, 10, 11, 12] Furthermore, it is uncertain whether such assessments must be done by supervisors or whether peers can participate in the evaluation. In this prospective multi‐institutional study we assess the performance characteristics of a verbal sign‐out evaluation tool for internal medicine housestaff and hospitalist attendings, and examine whether it can be used by peers as well as by external evaluators. This tool has previously been found to effectively discriminate between experienced and inexperienced nurses conducting nursing handoffs.[13]

METHODS

Tool Design and Measures

The Handoff CEX (clinical evaluation exercise) is a structured assessment based on the format of the mini‐CEX, an instrument used to assess the quality of history and physical examination by trainees for which validation studies have previously been conducted.[14, 15, 16, 17] We developed the tool based on themes we identified from our own expertise,[1, 5, 6, 8, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] the ACGME core competencies for trainees,[2] and the literature to maximize content validity. First, standardization has numerous demonstrable benefits for safety in general and handoffs in particular.[30, 31, 32] Consequently we created a domain for organization in which standardization was a characteristic of high performance.

Second, there is evidence that people engaged in conversation routinely overestimate peer comprehension,[27] and that explicit strategies to combat this overestimation, such as confirming understanding, explicitly assigning tasks rather than using open‐ended language, and using concrete language, are effective.[33] Accordingly we created a domain for communication skills, which is also an ACGME competency.

Third, although there were no formal guidelines for sign‐out content when we developed this tool, our own research had demonstrated that the content elements most often missing and felt to be important by stakeholders were related to clinical condition and explicating thinking processes,[5, 6] so we created a domain for content that highlighted these areas and met the ACGME competency of medical knowledge. In accordance with standards for evaluation of learners, we incorporated a domain for judgment to identify where trainees were in the RIME spectrum of reporter, interpreter, master, and educator.

Next, we added a section for professionalism in accordance with the ACGME core competencies of professionalism and patient care.[34] To avoid the disinclination of peers to label each other unprofessional, we labeled the professionalism domain as patient‐focused on the tool.

Finally, we included a domain for setting because of an extensive literature demonstrating increased handoff failures in noisy or interruptive settings.[35, 36, 37] We then revised the tool slightly based on our experiences among nurses and students.[13, 38] The final tool included the 6 domains described above and an assessment of overall competency. Each domain was scored on a 9‐point scale and included descriptive anchors at high and low ends of performance. We further divided the scale into 3 main sections: unsatisfactory (score 13), satisfactory (46), and superior (79). We designed 2 tools, 1 to assess the person providing the handoff and 1 to assess the handoff recipient, each with its own descriptive anchors. The recipient tool did not include a content domain (see Supporting Information, Appendix 1, in the online version of this article).

Setting and Subjects

We tested the tool in 2 different urban academic medical centers: the University of Chicago Medicine (UCM) and Yale‐New Haven Hospital (Yale). At UCM, we tested the tool among hospitalists, nurse practitioners, and physician assistants during the Monday and Tuesday morning and Friday evening sign‐out sessions. At Yale, we tested the tool among housestaff during the evening sign‐out session from the primary team to the on‐call covering team.

The UCM is a 550‐bed urban academic medical center in which the nonteaching hospitalist service cares for patients with liver disease, or end‐stage renal or lung disease awaiting transplant, and a small fraction of general medicine and oncology patients when the housestaff service exceeds its cap. No formal training on sign‐out is provided to attending or midlevel providers. The nonteaching hospitalist service operates as a separate service from the housestaff service and consists of 38 hospitalist clinicians (hospitalist attendings, nurse practitioners, and physicians assistants). There are 2 handoffs each day. In the morning the departing night hospitalist hands off to the incoming daytime hospitalist or midlevel provider. These handoffs occur at 7:30 am in a dedicated room. In the evening the daytime hospitalist or midlevel provider hands off to an incoming night hospitalist. This handoff occurs at 5:30 pm or 7:30 pm in a dedicated location. The written sign‐out is maintained on a Microsoft Word (Microsoft Corp., Redmond, WA) document on a password‐protected server and updated daily.

Yale is a 946‐bed urban academic medical center with a large internal medicine training program. Formal sign‐out education that covers the main domains of the tool is provided to new interns during the first 3 months of the year,[19] and a templated electronic medical record‐based electronic written handoff report is produced by the housestaff for all patients.[22] Approximately half of inpatient medicine patients are cared for by housestaff teams, which are entirely separate from the hospitalist service. Housestaff sign‐out occurs between 4 pm and 7 pm every night. At a minimum, the departing intern signs out to the incoming intern; this handoff is typically supervised by at least 1 second‐ or third‐year resident. All patients are signed out verbally; in addition, the written handoff report is provided to the incoming team. Most handoffs occur in a quiet charting room.

Data Collection

Data collection at UCM occurred between March and December 2010 on 3 days of each week: Mondays, Tuesdays, and Fridays. On Mondays and Tuesdays the morning handoffs were observed; on Fridays the evening handoffs were observed. Data collection at Yale occurred between March and May 2011. Only evening handoffs from the primary team to the overnight coverage were observed. At both sites, participants provided verbal informed consent prior to data collection. At the time of an eligible sign‐out session, a research assistant (D.R. at Yale, P.S. at UCM) provided the evaluation tools to all members of the incoming and outgoing teams, and observed the sign‐out session himself. Each person providing a handoff was asked to evaluate the recipient of the handoff; each person receiving a handoff was asked to evaluate the provider of the handoff. In addition, the trained third‐party observer (D.R., P.S.) evaluated both the provider and recipient of the handoff. The external evaluators were trained in principles of effective communication and the use of the tool, with specific review of anchors at each end of each domain. One evaluator had a DO degree and was completing an MPH degree. The second evaluator was an experienced clinical research assistant whose training consisted of supervised observation of 10 handoffs by a physician investigator. At Yale, if a resident was present, she or he was also asked to evaluate both the provider and recipient of the handoff. Consequently, every sign‐out session included at least 2 evaluations of each participant, 1 by a peer evaluator and 1 by a consistent external evaluator who did not know the patients. At Yale, many sign‐outs also included a third evaluation by a resident supervisor.

The study was approved by the institutional review boards at both UCM and Yale.

Statistical Analysis

We obtained mean, median, and interquartile range of scores for each subdomain of the tool as well as the overall assessment of handoff quality. We assessed convergent construct validity by assessing performance of the tool in different contexts. To do so, we determined whether scores differed by type of participant (provider or recipient), by site, by training level of evaluatee, or by type of evaluator (external, resident supervisor, or peer) by using Wilcoxon rank sum tests and Kruskal‐Wallis tests. For the assessment of differences in ratings by training level, we used evaluations of sign‐out providers only, because the 2 sites differed in scores for recipients. We also assessed construct validity by using Spearman rank correlation coefficients to describe the internal consistency of the tool in terms of the correlation between domains of the tool, and we conducted an exploratory factor analysis to gain insight into whether the subdomains of the tool were measuring the same construct. In conducting this analysis, we restricted the dataset to evaluations of sign‐out providers only, and used a principal components estimation method, a promax rotation, and squared multiple correlation communality priors. Finally, we conducted some preliminary studies of reliability by testing whether different types of evaluators provided similar assessments. We calculated a weighted kappa using Fleiss‐Cohen weights for external versus peer scores and again for supervising resident versus peer scores (Yale only). We were not able to assess test‐retest reliability by nature of the sign‐out process. Statistical significance was defined by a P value 0.05, and analyses were performed using SAS 9.2 (SAS Institute, Cary, NC).

RESULTS

A total of 149 handoff sessions were observed: 89 at UCM and 60 at Yale. Each site conducted a similar total number of evaluations: 336 at UCM, 337 at Yale. These sessions involved 97 unique individuals, 34 at UCM and 63 at Yale. Overall scores were high at both sites, but a wide range of scores was applied (Table 1).

Median, Mean, and Range of Handoff CEX Scores in Each Domain, Providers, and Recipients
DomainProvider, N=343Recipient, N=330P Value
Median (IQR)Mean (SD)RangeMedian (IQR)Mean (SD)Range
  • NOTE: Abbreviations: IQR, interquartile range; SD, standard deviation.

Setting7 (69)7.0 (1.7)297 (69)7.3 (1.6)290.05
Organization7 (68)7.2 (1.5)298 (69)7.4 (1.4)290.07
Communication7 (69)7.2 (1.6)198 (79)7.4 (1.5)290.22
Content7 (68)7.0 (1.6)29    
Judgment8 (68)7.3 (1.4)398 (79)7.5 (1.4)390.06
Professionalism8 (79)7.4 (1.5)298 (79)7.6 (1.4)390.23
Overall7 (68)7.1 (1.5)297 (68)7.4 (1.4)290.02

Handoff Providers

A total of 343 evaluations of handoff providers were completed regarding 67 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff provider evaluation tool was professionalism (median: 8; interquartile range [IQR]: 79). The lowest rated domain was content (median: 7; IQR: 68) (Table 1).

Handoff Recipients

A total of 330 evaluations of handoff recipients were completed regarding 58 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff recipient evaluation tool was professionalism, with a median of 8 (IQR: 7-9). The lowest rated domain was setting, with a median score of 7 (IQR: 6-9) (Table 1).

Validity Testing

Recipients received significantly higher scores than providers for the overall assessment (Table 1). Scores at UCM and Yale were similar in all domains for providers but were slightly lower at UCM in several domains for recipients (see Supporting Information, Appendix 2, in the online version of this article). Scores did not differ significantly by training level (Table 2). Third-party external evaluators consistently gave lower marks for the same handoff than peer evaluators did (Table 3).

Handoff CEX Scores by Training Level, Providers Only, Median (Range)

Domain | NP/PA, N=33 | Subintern or Intern, N=170 | Resident, N=44 | Hospitalist, N=95 | P Value
Setting | 7 (2-9) | 7 (3-9) | 7 (4-9) | 7 (2-9) | 0.89
Organization | 8 (4-9) | 7 (2-9) | 7 (4-9) | 8 (3-9) | 0.11
Communication | 8 (4-9) | 7 (2-9) | 7 (4-9) | 8 (1-9) | 0.72
Content | 7 (3-9) | 7 (2-9) | 7 (4-9) | 7 (2-9) | 0.92
Judgment | 8 (5-9) | 7 (3-9) | 8 (4-9) | 8 (4-9) | 0.09
Professionalism | 8 (4-9) | 7 (2-9) | 8 (3-9) | 8 (4-9) | 0.82
Overall | 7 (3-9) | 7 (2-9) | 8 (4-9) | 7 (2-9) | 0.28

NOTE: Abbreviations: NP/PA, nurse practitioner/physician assistant.
Handoff CEX Scores by Peer Versus External Evaluators, Median (Range)

Provider: Peer, N=152; Resident Supervisor, N=43; External, N=147. Recipient: Peer, N=145; Resident Supervisor, N=43; External, N=142.

Domain | Provider Peer | Provider Resident Supervisor | Provider External | P Value | Recipient Peer | Recipient Resident Supervisor | Recipient External | P Value
Setting | 8 (3-9) | 7 (3-9) | 7 (2-9) | 0.02 | 8 (2-9) | 7 (3-9) | 7 (2-9) | <0.001
Organization | 8 (3-9) | 8 (3-9) | 7 (2-9) | 0.18 | 8 (3-9) | 8 (6-9) | 7 (2-9) | <0.001
Communication | 8 (3-9) | 8 (3-9) | 7 (1-9) | <0.001 | 8 (3-9) | 8 (4-9) | 7 (2-9) | <0.001
Content | 8 (3-9) | 8 (2-9) | 7 (2-9) | <0.001 | N/A | N/A | N/A | N/A
Judgment | 8 (4-9) | 8 (3-9) | 7 (3-9) | <0.001 | 8 (3-9) | 8 (4-9) | 7 (3-9) | <0.001
Professionalism | 8 (3-9) | 8 (5-9) | 7 (2-9) | 0.02 | 8 (3-9) | 8 (6-9) | 7 (3-9) | <0.001
Overall | 8 (3-9) | 8 (3-9) | 7 (2-9) | 0.001 | 8 (2-9) | 8 (4-9) | 7 (2-9) | <0.001

NOTE: Abbreviations: N/A, not applicable.

Spearman rank correlation coefficients among the CEX subdomains for provider scores ranged from 0.71 to 0.86, except for setting (Table 4). Setting was less well correlated with the other subdomains, with correlation coefficients ranging from 0.39 to 0.41. Correlations between individual domains and the overall rating ranged from 0.80 to 0.86, except setting, which had a correlation of 0.55. Every correlation was significant at P<0.001. Correlation coefficients for recipient scores were very similar to those for provider scores (see Supporting Information, Appendix 3, in the online version of this article).

Spearman Correlation Coefficients, Provider Evaluations (N=342)

Domain | Setting | Organization | Communication | Content | Judgment | Professionalism
Setting | 1.00 | 0.40 | 0.40 | 0.39 | 0.39 | 0.41
Organization | 0.40 | 1.00 | 0.80 | 0.71 | 0.77 | 0.73
Communication | 0.40 | 0.80 | 1.00 | 0.79 | 0.82 | 0.77
Content | 0.39 | 0.71 | 0.79 | 1.00 | 0.80 | 0.74
Judgment | 0.39 | 0.77 | 0.82 | 0.80 | 1.00 | 0.78
Professionalism | 0.41 | 0.73 | 0.77 | 0.74 | 0.78 | 1.00
Overall | 0.55 | 0.80 | 0.84 | 0.83 | 0.86 | 0.82

NOTE: All P values <0.0001.

We analyzed 343 provider evaluations in the factor analysis; there were 6 missing values. The scree plot of eigenvalues did not support more than 1 factor; however, the rotated factor pattern for standardized regression coefficients for the first factor and the final communality estimates showed the setting component yielding smaller values than did other scale components (see Supporting Information, Appendix 4, in the online version of this article).
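For readers replicating this step outside SAS, a roughly equivalent exploratory factor analysis can be run with the Python factor_analyzer package. This is an approximation: it uses principal components extraction with a promax rotation as described, but does not reproduce SAS's squared multiple correlation communality priors. Variable names follow the earlier sketch.

```python
# Approximate re-implementation of the exploratory factor analysis:
# principal components extraction with a promax rotation.
from factor_analyzer import FactorAnalyzer

X = providers[domains].dropna()  # provider evaluations, 6 subdomains
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="promax")
fa.fit(X)

print(fa.loadings_)             # rotated factor pattern (loadings)
print(fa.get_communalities())   # final communality estimates
```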

Reliability Testing

Weighted kappa scores for provider evaluations ranged from 0.28 (95% confidence interval [CI]: 0.01, 0.56) for setting to 0.59 (95% CI: 0.38, 0.80) for organization, and were generally higher for resident versus peer comparisons than for external versus peer comparisons. Weighted kappa scores for recipient evaluations were slightly lower than those for provider evaluations in the external versus peer comparisons, and agreement was no better than chance for resident versus peer comparisons (Table 5).

Weighted Kappa Scores

Domain | Provider: External vs Peer, N=144 (95% CI) | Provider: Resident vs Peer, N=42 (95% CI) | Recipient: External vs Peer, N=134 (95% CI) | Recipient: Resident vs Peer, N=43 (95% CI)
Setting | 0.39 (0.24, 0.54) | 0.28 (0.01, 0.56) | 0.34 (0.20, 0.48) | 0.48 (0.27, 0.69)
Organization | 0.43 (0.29, 0.58) | 0.59 (0.39, 0.80) | 0.39 (0.22, 0.55) | 0.03 (-0.23, 0.29)
Communication | 0.34 (0.19, 0.49) | 0.52 (0.37, 0.68) | 0.36 (0.22, 0.51) | 0.02 (-0.18, 0.23)
Content | 0.38 (0.25, 0.51) | 0.53 (0.27, 0.80) | N/A | N/A
Judgment | 0.36 (0.22, 0.49) | 0.54 (0.25, 0.83) | 0.28 (0.15, 0.42) | -0.12 (-0.34, 0.09)
Professionalism | 0.47 (0.32, 0.63) | 0.47 (0.23, 0.72) | 0.35 (0.18, 0.51) | -0.01 (-0.29, 0.26)
Overall | 0.50 (0.36, 0.64) | 0.45 (0.24, 0.67) | 0.31 (0.16, 0.48) | 0.07 (-0.20, 0.34)

NOTE: Abbreviations: CI, confidence interval; N/A, not applicable.
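For reference, Fleiss-Cohen weights are the quadratic kappa weights, w_ij = 1 - (i - j)^2 / (k - 1)^2 for a k-point scale, so the agreement statistic used here can be approximated with a standard quadratic weighted kappa. A minimal sketch in Python follows (the study used SAS); the pairing logic and column names are hypothetical.

```python
# Sketch of the agreement analysis: weighted kappa with Fleiss-Cohen
# (quadratic) weights, w_ij = 1 - (i - j)**2 / (k - 1)**2, on the
# 9-point scale, comparing external and peer ratings of the same handoff.
from sklearn.metrics import cohen_kappa_score

pairs = scores.pivot_table(index="session_id", columns="evaluator_type",
                           values="overall", aggfunc="first").dropna()
kappa = cohen_kappa_score(pairs["external"], pairs["peer"],
                          labels=list(range(1, 10)),
                          weights="quadratic")
```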

DISCUSSION

In this study we found that an evaluation tool for direct observation of housestaff and hospitalists generated a range of scores and demonstrated good construct validity, performing similarly across 2 different institutions and among both trainees and attendings while showing high internal consistency. However, external evaluators gave consistently lower marks than peer evaluators at both sites, resulting in low reliability when comparing these 2 groups of raters.

It has traditionally been difficult to conduct direct evaluations of handoffs because they may occur at haphazard times, in variable locations, and with little advance notice. For this reason, several attempts have been made to incorporate peers in evaluations of handoff practices.[5, 39, 40] Using peers to conduct evaluations also has the advantage that peers are more likely to be familiar with the patients being handed off and might recognize handoff flaws that external evaluators would miss. Nonetheless, peer evaluations have some important liabilities. Peers may be unwilling or unable to provide honest critiques of their colleagues, given that they must work closely together for years. Trainee peers may also lack sufficient clinical expertise or experience to accurately assess competence. In our study, we found that peers gave consistently higher marks to their colleagues than did external evaluators, suggesting that they may have found it difficult to criticize their colleagues. We conclude that peer evaluation alone is likely an insufficient means of evaluating handoff quality.

Supervising residents gave marks very similar to those of intern peers, suggesting that they too were unwilling to criticize or insufficiently experienced to evaluate, or, alternatively, that the peer evaluations were reasonable. We suspect the latter is unlikely, given that external evaluator scores were consistently lower than peer scores. If anything, one would expect the external evaluators to be biased toward higher scores, because they were not familiar with the patients and therefore could not detect inaccuracies or omissions in the sign-out.

The tool appeared to perform less well in most cases for recipients than for providers, with a narrower range of scores and low weighted kappa scores. Although recipients play a key role in ensuring a high-quality sign-out by paying close attention, ensuring it is a bidirectional conversation, asking appropriate questions, and reading back key information, it may be that evaluators were unable to place these activities within the same domains that were used for the provider evaluation. An altogether different approach to evaluating recipients may be necessary.[41]

In general, scores were clustered at the top of the range, as is typical for evaluations. One strategy to spread out scores further would be to refine the tool by adding anchors for satisfactory performance, not just at the extremes. A second approach might be to reduce the grading scale to only 3 points (unsatisfactory, satisfactory, superior) to force more scores to the middle; however, this approach might limit the discriminating ability of the tool.

We have previously studied the use of this tool among nurses. In that study, we also found consistently higher scores by peers than by external evaluators. We did, however, find a positive effect of experience, in which more experienced nurses received higher scores on average. We did not observe a similar training effect in this study, and there are several possible explanations. First, the types of handoffs assessed may have played a role. At UCM, some assessed handoffs were from night staff to day staff, which might be of lower quality than day-to-night handoffs, whereas at Yale, all handoffs were from day to night teams. Thus, average scores at UCM (primarily hospitalists) might have been lowered by the type of handoff provided. Second, given that hospitalist evaluations were conducted exclusively at UCM and housestaff evaluations exclusively at Yale, the lack of difference between hospitalists and housestaff may reflect differences in evaluation practice or handoff practice at the 2 sites rather than training level. Third, in our experience, attending physicians provide briefer, less comprehensive sign-outs than trainees, particularly when communicating with equally experienced attendings; these sign-outs may appropriately be scored lower on the tool. Fourth, the great majority of the hospitalists at UCM were within 5 years of residency and therefore not much more experienced than the trainees. Finally, it is possible that skills do not improve over time, given the widespread lack of observation and feedback on this important skill during training years.

The high internal consistency of most of the subdomains and the loading of all subdomains except setting onto 1 factor are evidence of convergent construct validity, but they also suggest that evaluators have difficulty distinguishing among components of sign-out quality. Internal consistency may also reflect a halo effect, in which scores on different domains are all influenced by a common overall judgment.[42] We are currently testing a shorter version of the tool that includes domains only for content, professionalism, and setting, in addition to an overall score. The fact that setting did not correlate as well with the other domains suggests that sign-out practitioners may not have, or may not exercise, control over their surroundings. Consequently, it may ultimately be reasonable to drop this domain from the tool or, alternatively, to refocus sign-out skills training on the need to ensure a quiet setting.

There are several limitations to this study. External evaluations were conducted by personnel who were not familiar with the patients, and they may therefore have overestimated the quality of sign‐out. Studying different types of physicians at different sites might have limited our ability to identify differences by training level. As is commonly seen in evaluation studies, scores were skewed to the high end, although we did observe some use of the full range of the tool. Finally, we were limited in our ability to test inter‐rater reliability because of the multiple sources of variability in the data (numerous different raters, with different backgrounds at different settings, rating different individuals).

In summary, we developed a handoff evaluation tool that was easily completed by housestaff and attendings without training, that performed similarly in a variety of different settings at 2 institutions, and that can in principle be used either for peer evaluations or for external evaluations, although peer evaluations may be positively biased. Further work will be done to refine and simplify the tool.

ACKNOWLEDGMENTS

Disclosures: Development and evaluation of the sign‐out CEX was supported by a grant from the Agency for Healthcare Research and Quality (1R03HS018278‐01). Dr. Arora is supported by the National Institute on Aging (K23 AG033763). Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30AG021342 NIH/NIA). No funding source had any role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality, the National Institute on Aging, the National Institutes of Health, or the American Federation for Aging Research. Dr. Horwitz had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. An earlier version of this work was presented as a poster presentation at the Society of General Internal Medicine Annual Meeting in Orlando, Florida on May 9, 2012. Dr. Rand is now with the Department of Medicine, University of Vermont College of Medicine, Burlington, Vermont. Mr. Staisiunas is now with the Law School, Marquette University, Milwaukee, Wisconsin. The authors declare they have no conflicts of interest.

Appendix A

PROVIDER HAND-OFF CEX TOOL

RECIPIENT HAND-OFF CEX TOOL

Appendix B

Handoff CEX scores by site of evaluation, Median (Range)

Domain | Provider UCM, N=172 | Provider Yale, N=170 | P Value | Recipient UCM, N=163 | Recipient Yale, N=167 | P Value
Setting | 7 (2-9) | 7 (3-9) | 0.32 | 7 (2-9) | 7 (3-9) | 0.36
Organization | 8 (2-9) | 7 (3-9) | 0.30 | 7 (2-9) | 8 (5-9) | 0.001
Communication | 7 (1-9) | 7 (3-9) | 0.67 | 7 (2-9) | 8 (4-9) | 0.03
Content | 7 (2-9) | 7 (2-9) | | N/A | N/A | N/A
Judgment | 8 (3-9) | 7 (3-9) | 0.60 | 7 (3-9) | 8 (4-9) | 0.001
Professionalism | 8 (2-9) | 8 (3-9) | 0.67 | 8 (3-9) | 8 (4-9) | 0.35
Overall | 7 (2-9) | 7 (3-9) | 0.41 | 7 (2-9) | 8 (4-9) | 0.005

Appendix C

Spearman correlation coefficients, recipients (N=330)

Domain | Setting | Organization | Communication | Judgment | Professionalism
Setting | 1.00 | 0.46 | 0.48 | 0.47 | 0.40
Organization | 0.46 | 1.00 | 0.78 | 0.75 | 0.75
Communication | 0.48 | 0.78 | 1.00 | 0.85 | 0.77
Judgment | 0.47 | 0.75 | 0.85 | 1.00 | 0.74
Professionalism | 0.40 | 0.75 | 0.77 | 0.74 | 1.00
Overall | 0.60 | 0.77 | 0.84 | 0.82 | 0.77

NOTE: All P values <0.0001.

Appendix D

Factor analysis results for provider evaluations

Rotated Factor Pattern (Standardized Regression Coefficients), N=336

Domain | Factor 1 | Factor 2
Organization | 0.64 | 0.27
Communication | 0.79 | 0.16
Content | 0.82 | 0.06
Judgment | 0.86 | 0.06
Professionalism | 0.66 | 0.23
Setting | 0.18 | 0.29

References
  1. Horwitz LI, Krumholz HM, Green ML, Huot SJ. Transfers of patient care between house staff on internal medicine wards: a national survey. Arch Intern Med. 2006;166(11):1173-1177.
  2. Accreditation Council for Graduate Medical Education. Common program requirements. 2011; http://www.acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed August 23, 2011.
  3. Petersen LA, Brennan TA, O'Neil AC, Cook EF, Lee TH. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866-872.
  4. Sutcliffe KM, Lewton E, Rosenthal MM. Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186-194.
  5. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401-407.
  6. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168(16):1755-1760.
  7. Borowitz SM, Waggoner-Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign-out (in-hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17(1):6-10.
  8. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. What are covering doctors told about their patients? Analysis of sign-out among internal medicine house staff. Qual Saf Health Care. 2009;18(4):248-255.
  9. Gakhar B, Spencer AL. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign-out practices of internal medicine interns. Acad Med. 2010;85(7):1182-1188.
  10. Raduma-Tomas MA, Flin R, Yule S, Williams D. Doctors' handovers in hospitals: a literature review. Qual Saf Health Care. 2011;20(2):128-133.
  11. Bump GM, Jovin F, Destefano L, et al. Resident sign-out and patient hand-offs: opportunities for improvement. Teach Learn Med. 2011;23(2):105-111.
  12. Helms AS, Perez TE, Baltz J, et al. Use of an appreciative inquiry approach to improve resident sign-out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287-291.
  13. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365-2702.2012.04131.x.
  14. Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini-CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med. 1995;123(10):795-799.
  15. Norcini JJ, Blank LL, Arnold GK, Kimball HR. Examiner differences in the mini-CEX. Adv Health Sci Educ Theory Pract. 1997;2(1):27-33.
  16. Durning SJ, Cation LJ, Markert RJ, Pangaro LN. Assessing the reliability and validity of the mini-clinical evaluation exercise for internal medicine residency training. Acad Med. 2002;77(9):900-904.
  17. Holmboe ES, Huot S, Chung J, Norcini J, Hawkins RE. Construct validity of the miniclinical evaluation exercise (miniCEX). Acad Med. 2003;78(8):826-830.
  18. Horwitz LI, Meredith T, Schuur JD, Shah NR, Kulkarni RG, Jenq GY. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701-710.e4.
  19. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign-out skills curriculum. J Gen Intern Med. 2007;22(10):1470-1474.
  20. Horwitz LI, Moin T, Wang L, Bradley EH. Mixed methods evaluation of oral sign-out practices. J Gen Intern Med. 2007;22(S1):S114.
  21. Horwitz LI, Parwani V, Shah NR, et al. Evaluation of an asynchronous physician voicemail sign-out for emergency department admissions. Ann Emerg Med. 2009;54(3):368-378.
  22. Horwitz LI, Schuster KM, Thung SF, et al. An institution-wide handoff task force to standardise and improve physician handoffs. BMJ Qual Saf. 2012;21(10):863-871.
  23. Arora V, Johnson J. A model for building a standardized hand-off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646-655.
  24. Arora V, Kao J, Lovinger D, Seiden SC, Meltzer D. Medication discrepancies in resident sign-outs and their potential to harm. J Gen Intern Med. 2007;22(12):1751-1755.
  25. Arora VM, Johnson JK, Meltzer DO, Humphrey HJ. A theoretical framework and competency-based approach to improving handoffs. Qual Saf Health Care. 2008;17(1):11-14.
  26. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433-440.
  27. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125(3):491-496.
  28. Johnson JK, Arora VM. Improving clinical handovers: creating local solutions for a global problem. Qual Saf Health Care. 2009;18(4):244-245.
  29. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1(4):257-266.
  30. Salerno SM, Arnett MV, Domanski JP. Standardized sign-out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121-126.
  31. Haig KM, Sutton S, Whittington J. SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167-175.
  32. Patterson ES. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17(1):4-5.
  33. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125-132.
  34. Ratanawongsa N, Bolen S, Howell EE, Kern DE, Sisson SD, Larriviere D. Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21(7):758-763.
  35. Coiera E, Tombs V. Communication behaviours in a hospital setting: an observational study. BMJ. 1998;316(7132):673-676.
  36. Coiera EW, Jayasuriya RA, Hardy J, Bannan A, Thorpe ME. Communication loads on clinical staff in the emergency department. Med J Aust. 2002;176(9):415-418.
  37. Ong MS, Coiera E. A systematic review of failures in handoff communication during intrahospital transfers. Jt Comm J Qual Patient Saf. 2011;37(6):274-284.
  38. Farnan JM, Paro JA, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2010;25(2):129-134.
  39. Kitch BT, Cooper JB, Zapol WM, et al. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34(10):563-570.
  40. Li P, Stelfox HT, Ghali WA. A prospective observational study of physician handoff for intensive-care-unit-to-ward patient transfers. Am J Med. 2011;124(9):860-867.
  41. Greenstein E, Arora V, Banerjee S, Staisiunas P, Farnan J. Characterizing physician listening behavior during hospitalist handoffs using the HEAR checklist [published online ahead of print December 20, 2012]. BMJ Qual Saf. doi:10.1136/bmjqs-2012-001138.
  42. Thorndike EL. A constant error in psychological ratings. J Appl Psychol. 1920;4(1):25-29.
Issue
Journal of Hospital Medicine - 8(4)
Page Number
191-200
Display Headline
Development of a handoff evaluation tool for shift‐to‐shift physician handoffs: The handoff CEX

Copyright © 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Leora I. Horwitz, MD, Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, P.O. Box 208093, New Haven, CT 06520-8093; Telephone: 203-688-5678; Fax: 203-737-3306; E-mail: [email protected]

A new way to treat ear infections

Article Type
Changed
Fri, 01/18/2019 - 12:35
Display Headline
A new way to treat ear infections

A new way to treat ear infections in children, called "individualized care," is described in the May 2013 issue of the Pediatric Infectious Diseases Journal. It explains how to reduce the frequency of repeated ear infections nearly 5-fold, and the need for ear tube surgery more than 6-fold, in your practice.

Dr. Janet Casey at Legacy Pediatrics in Rochester, N.Y.; Anthony Almudevar, Ph.D., of the University of Rochester; and I conducted the prospective, longitudinal, multiyear study with the support of the National Institutes of Health’s National Institute on Deafness and Other Communication Disorders and the Thrasher Research Fund (Pediatr. Infect. Dis. J. 2013 Jan. 21 [Epub ahead of print]).

Dr. Michael E. Pichichero

The study compared three groups: children who were in the Legacy Pediatrics practice and received individualized care; control children in the Legacy practice who did not participate because their parents declined participation (they did not want venipunctures or ear taps); and community controls drawn from a different pediatric practice in the suburbs of Rochester that used the diagnostic criteria of the American Academy of Pediatrics and treated all children empirically with high-dose amoxicillin as endorsed by the former and new AAP treatment guidelines (Pediatrics 2013;131:e964-99).

The new treatment paradigm of individualized care included a tympanocentesis procedure, also called an ear tap, to determine precisely the bacteria causing the ear infection. Treatment was started with high-dose amoxicillin/clavulanate. The sample of fluid then was taken to my laboratory at the Rochester General Hospital Research Institute, where the bacteria isolated were tested against a panel of antibiotics to determine whether to continue with amoxicillin/clavulanate or switch to a more effective antibiotic for the child based on culture susceptibility. By doing the ear tap and antibiotic testing, the frequency of repeated ear infections was reduced 2.5-fold compared with the Legacy practice controls who did not participate, and 4.6-fold compared with the community controls (a 4.6-fold reduction corresponds to about 78% fewer infections).

The most common reason for children to receive ear tubes is repeated ear infections, so when the frequency of ear infections was reduced, so too was the frequency of ear tube surgery. The new treatment approach resulted in a 2.6-fold reduction in ear tube surgeries in the individualized care group compared with the Legacy Pediatrics controls, and a 6.2-fold reduction compared with the community controls.

Allowing the child to receive an ear tap was a requirement for participation in the study. Dr. Casey and I found a way to do the procedure painlessly by instilling 8% Novocain drops in the ear canal to anesthetize the tympanic membrane. After 15 minutes, there was no pain when the tap was done. We used a papoose board to hold the child still.

The ear-tap procedure not only allowed individualized care with the astonishing results reported, but also allowed more rapid healing of the ear, since removal of the pus and bacteria from behind the eardrum allowed the antibiotics to work better and the immune system to clear the infection more effectively.

The article discusses reasons for the remarkable difference in results with the individualized care approach. First, Dr. Casey and I have undergone special training from ear, nose, and throat (ENT) doctors in the diagnosis of ear infections.

In earlier studies, a group of experts in otitis media diagnosis joined together in a continuing medical education course, sponsored by Outcomes Management Education Workshops, that used video exams to test whether pediatricians, family physicians, and urgent care physicians could correctly distinguish true acute otitis media (AOM) from otitis media with effusion (OME) and from variations of normal on the tympanic membrane exam. We found that physicians in all three specialties, residents in training in all three specialties, nurse practitioners, and physician assistants overdiagnosed AOM about half the time.

Second, the selection of antibiotic proved to be key. Dr. Casey and I have the only otitis media research center in the United States currently providing tympanocentesis data. We have found that amoxicillin kills the bacteria causing AOM infections in children in the Rochester area only about 30% of the time. By knowing the bacteria, an evidence-based antibiotic can be chosen.

I expect that readers of this column will believe they diagnose AOM correctly nearly all the time and that it is the other physician who overdiagnoses. I expect that readers will be reluctant to depart from the AAP guideline recommendation of amoxicillin as the treatment of first choice. Most of all, I expect readers to be reluctant to undertake training in how to do the ear tap procedure. Change is always resisted by the majority, and it occurs only with time, if the evidence is strong and adoption grows.

Nevertheless, I encourage all to find an opportunity to attend a CME course on AOM diagnosis, and I hope that resident training programs will incorporate more effective teaching on AOM diagnosis. I recommend high-dose amoxicillin/clavulanate as the treatment of choice for AOM; if it is not tolerated, then one of the preferred cephalosporins endorsed by the AAP guideline should be chosen.

I recommend that resident training programs include tympanocentesis as part of the curriculum. Why are residents taught how to do a spinal tap, an arterial puncture, and a lung tap, but not an ear tap? I also recommend that practicing pediatricians gain the skill to perform tympanocentesis. I recognize that some just won’t have the hand/eye coordination or steady hand needed, so it’s not for everyone. However, especially in group practices, a few trained providers could become an internal referral resource for getting the procedure done.

Arguments about malpractice are a smokescreen. The risks of tympanocentesis are no greater than venipuncture in trained and skilled hands. It is included as a standard procedure for pediatricians in our state without any additional malpractice insurance costs. And Dr. Casey and I have effectively managed to get the procedure done when a patient needs it without blowing our schedules off the map and raising the ire of patients and staff. It just takes a commitment.

It would be convenient to refer to an ENT doctor for a tympanocentesis, but most ENT doctors have not been trained to do the procedure while the child is awake and prefer to have the child asleep. Also, try to get a child in for an ENT appointment with no notice on the same day! Moreover, ENT doctors have been trained that if an ear tap is needed, it is advisable to go ahead and put in an ear tube.

Because of the success of this research, our center received a renewal of support from NIH in 2012 to continue the study through 2017. Several pediatric practices in Rochester are part of the research – Long Pond Pediatrics, Westfall Pediatrics, Sunrise Pediatrics, Lewis Pediatrics, and Pathway Pediatrics – as well as Dr. Margo Benoit of the department of otolaryngology at the University of Rochester and Dr. Frank Salamone and Dr. Kevin Kozara of the Rochester Otolaryngology Group, which is affiliated with Rochester General Hospital.

Dr. Pichichero, a specialist in pediatric infectious diseases, is director of the Rochester (N.Y.) General Hospital Research Institute. He is also a pediatrician at Legacy Pediatrics in Rochester. He said he had no relevant financial conflicts of interest to disclose.


Hospitalist Communication Training

Article Type
Changed
Sun, 05/21/2017 - 18:16
Display Headline
Impact of hospitalist communication‐skills training on patient‐satisfaction scores

Hospital settings present unique challenges to patient‐clinician communication and collaboration. Patients frequently have multiple, active conditions. Interprofessional teams are large and care for multiple patients at the same time, and team membership is dynamic and dispersed. Moreover, physicians spend relatively little time with patients[1, 2] and seldom receive training in communication skills after medical school.

The Agency for Healthcare Research and Quality (AHRQ) has developed the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey to assess hospitalized patients' experiences with care.[3, 4, 5] Results are publicly reported on the US Department of Health and Human Services Hospital Compare Web site[6] and now affect hospital payment through the Centers for Medicare and Medicaid Services Hospital Value-Based Purchasing Program.[7]

Despite this increased transparency and accountability for performance related to the patient experience, little research has been conducted on how hospitals or clinicians might improve performance. Although interventions to enhance physician communication skills have shown improvements in observed behaviors, few studies have assessed benefit from the patient's perspective and few interventions have been integrated into practice.[8] We sought to assess the impact of a communication‐skills training program, based on a common framework used by hospitals, on patient satisfaction with doctor communication and overall hospital care.

METHODS

Setting and Study Design

The study was conducted at Northwestern Memorial Hospital (NMH), an 897-bed tertiary-care teaching hospital in Chicago, IL, and was approved by the institutional review board of Northwestern University. This study was a preintervention vs postintervention comparison of patient-satisfaction scores. The intervention was a communication-skills training program for all NMH hospitalists. We compared patient-satisfaction survey data for patients admitted to the nonteaching hospitalist service during the 26 weeks prior to the intervention with data for patients admitted to the same service during the 22 weeks afterward. Hospitalists on this service worked 7 consecutive days, usually followed by 7 days free from clinical duty. Hospitalists cared for approximately 10 to 14 patients per day without the assistance of resident physicians or midlevel providers (ie, physician assistants or nurse practitioners). Nighttime patient care was provided by in-house hospitalists (ie, nocturnists). A majority of nighttime shifts were staffed by physicians who worked for the group for a single year. As a result of a prior intervention, hospitalists' patients were localized to specific units, each overseen by a hospitalist-unit medical director.[9] We excluded all patients initially admitted to other services (eg, intensive care unit, surgical services) and patients discharged from other services.
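To make the design concrete, a pre/post comparison of patient-satisfaction "top-box" rates could be tested as a difference of two proportions. The sketch below is illustrative only, not the authors' stated analysis, and all counts and names are hypothetical.

```python
# Generic sketch of a pre/post comparison of "top-box" satisfaction
# rates using a chi-square test of two proportions; counts are
# hypothetical and do not come from the study.
from scipy.stats import chi2_contingency

pre_topbox, pre_n = 420, 700    # hypothetical pre-intervention counts
post_topbox, post_n = 390, 600  # hypothetical post-intervention counts

table = [[pre_topbox, pre_n - pre_topbox],
         [post_topbox, post_n - post_topbox]]
chi2, p, dof, expected = chi2_contingency(table)
```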

Hospitalist Communication Skills Training Program

Northwestern Memorial Hospital implemented a communication‐skills training program in 2009 intended to enhance patient experience and improve patient‐satisfaction scores. All nonphysician staff were required to attend a 4‐hour training session based on the AIDET (Acknowledge, Introduce, Duration, Explanation, and Thank You) principles developed by the Studer Group.[10] The Studer Group is a well‐known healthcare consulting firm that aims to assist healthcare organizations to improve clinical, operational, and financial outcomes. The acronym AIDET provides a framework for communication‐skills behaviors (Table 1).

AIDET Elements, Explanations, and Examples
NOTE: Abbreviations: AIDET, Acknowledge, Introduce, Duration, Explanation, and Thank You; ICU, intensive care unit; IR, interventional radiology; PRN, as needed.

Acknowledge
- Use an appropriate greeting, smile, and make eye contact. Example: knock on the patient's door; "Hello, may I come in now?"
- Respect privacy: knock and ask for permission before entering; use curtains/doors appropriately. Example: "Good morning. Is it a good time to talk?"
- Position yourself on the same level as the patient. Example: "Who do you have here with you today?"
- Do not ignore others in the room (visitors or colleagues).

Introduce
- Introduce yourself by name and role. Example: "My name is Dr. Smith and I am your hospitalist physician. I'll be taking care of you while you are in the hospital."
- Introduce any accompanying members of your team. Example (on teaching service): "I'm the supervising physician" or "I'm the physician in charge of your care."
- Address patients by title and last name (eg, Mrs. Smith) unless given permission to use the first name.
- Explain why you are there.
- Do not assume patients remember your name or role.

Duration
- Provide specific information on when you will be available or when you will be back. Example: "I'll be back between 2 and 3 pm, so if you think of any additional questions I can answer them then."
- For tests/procedures: explain how long it will take and provide a time range for when it will happen. Example: "In my experience, the test I am ordering for you will be done within the next 12 to 24 hours."
- Provide updates to the patient if the expected wait time has changed. Example: "I should have the results for this test when I see you tomorrow morning."
- Do not blame another department or staff for delays.

Explanation
- Explain your rationale for decisions. Example: "I have ordered this test because..."
- Use terms the patient can understand. Example: "The possible side effects of this medication include..."
- Explain next steps and summarize the plan for the day. Example: "What questions do you have?"
- Confirm understanding using teach-back. Example: "What are you most concerned about?"
- Assume patients have questions and/or concerns. Example: "I want to make sure you understood everything. Can you tell me in your own words what you will need to do once you are at home?"
- Do not use acronyms that patients may not understand (eg, PRN, IR, ICU).

Thank You
- Thank the patient and/or family. Example: "I really appreciate you telling me about your symptoms. I know you told several people before."
- Ask if there is anything else you can do for the patient. Example: "Thank you for giving me the opportunity to care for you. What else can I do for you today?"
- Explain when you will be back and how the patient can reach you if needed. Example: "I'll see you again tomorrow morning. If you need me before then, just ask the nurse to page me."
- Do not appear rushed or distracted when ending your interaction.

We adapted the AIDET framework and designed a communication‐skills training program, specifically for physicians, to emphasize reflection on current communication behaviors, deliberate practice of enhanced communication skills, and feedback based on performance during simulated and real clinical encounters. These educational methods are consistent with recommended strategies to improve behavioral performance.[11] During the first session, we discussed measurement of patient satisfaction, introduced AIDET principles, gave examples of specific behaviors for each principle, and had participants view 2 short videos displaying a range of communication skills followed by facilitated debriefing.[12] The second session included 3 simulation‐based exercises. Participants rotated roles in the scenarios (eg, patient, family member, physician) and facilitated debriefing was co‐led by a hospitalist leader (K.J.O.) and a patient‐experience administrative leader (either T.D. or J.R.). The third session involved direct observation of participants' clinical encounters and immediate feedback. This coaching session was performed for an initial group of 5 hospitalist‐unit medical directors by the manager of patient experience (T.D.) and subsequently by these medical directors for the remaining participants in the program. Each of the 3 sessions lasted 90 minutes. Instructional materials are available from the authors upon request.

The communication‐skills training program began in August 2011 and extended through January 2012. Participation was strongly encouraged but not mandatory. Sessions were offered multiple times to accommodate clinical schedules. One of the co‐investigators took attendance at each session to assess participation rates.

Survey Instruments and Data

During the study period, NMH used a third-party vendor, Press Ganey Associates, Inc., to administer the HCAHPS survey to a random sample of 40% of hospitalized patients between 48 hours and 6 weeks after discharge. The HCAHPS survey has 27 total questions, including 3 questions assessing doctor communication as a domain.[3] In addition to the HCAHPS questions, the survey administered to NMH patients included questions developed by Press Ganey. Questions in the surveys used ordinal response scales. Specifically, response options for HCAHPS doctor-communication questions were "never," "sometimes," "usually," and "always." Response options for Press Ganey doctor-communication questions were "very poor," "poor," "fair," "good," and "very good." Patients provided an overall hospital rating in the HCAHPS survey using a 0 to 10 scale, with 0 = "worst hospital possible" and 10 = "best hospital possible."

We defined the preintervention period as the 26 weeks prior to implementation of the communication‐skills program (patients admitted on or between January 31, 2011, and July 31, 2011) and the postintervention period as the 22 weeks after implementation (patients admitted on or between January 31, 2012, and June 30, 2012). The postintervention period was 1 month shorter than the preintervention period in an effort to avoid confounding due to a number of new hospitalists starting in July 2012. We defined a discharge attending as highly trained if he/she attended all 3 sessions of the communication‐skills training program. The discharge attending was designated as no/low training if he/she attended fewer than the full 3 sessions.
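
As a concrete illustration of this classification rule, the sketch below tags each discharge attending by the number of sessions attended; the attending identifiers and session counts are invented for the example, since the study's attendance data are not shown.

    # Hypothetical sketch: classify discharge attendings by sessions attended (0-3).
    sessions_attended = {"attending_01": 3, "attending_02": 2, "attending_03": 0}  # invented counts

    training_group = {
        attending: "highly trained" if n == 3 else "no/low training"
        for attending, n in sessions_attended.items()
    }
    print(training_group)
    # {'attending_01': 'highly trained', 'attending_02': 'no/low training', 'attending_03': 'no/low training'}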

Data Analysis

Data were obtained from the Northwestern Medicine Enterprise Data Warehouse, a single, integrated database of all clinical and research data from all patients receiving treatment through Northwestern University healthcare affiliates. We used χ2 and Student t tests to compare patient demographic characteristics preintervention vs postintervention. We used χ2 tests to compare the percentage of patients giving top-box ratings to each doctor-communication question (ie, "always" for HCAHPS and "very good" for Press Ganey) and giving an overall hospital rating of 9 or 10. We used top-box comparisons, rather than comparison of mean or median scores, because patient-satisfaction data are typically highly skewed toward favorable responses. This approach is consistent with prior HCAHPS research.[4, 5] We calculated composite doctor-communication scores as the proportion of top-box responses across items in each survey (ie, HCAHPS and Press Ganey). We first compared all patients during the preintervention and postintervention periods. We then identified patients for whom the discharge attending worked as a hospitalist at NMH during both the preintervention and postintervention periods and compared satisfaction for patients discharged by hospitalists who had no/low training and for patients discharged by hospitalists who were highly trained. We performed multivariate logistic regression, using intervention period as the predictor variable and top-box rating as the outcome variable for each doctor-communication question and for overall hospital rating of 9 or 10. Covariates included patient age, sex, race, payer, self-reported education level, and self-reported health status. Models accounted for clustering of patients within discharge physicians. Similarly, we conducted multivariate logistic regression, using discharge attending category as the predictor variable (no/low training vs highly trained). The various comparisons described were intended to mimic intention-to-treat and treatment-received analyses in light of incomplete participation in the communication-skills program. All analyses were conducted using Stata version 11.2 (StataCorp, College Station, TX).
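
For concreteness, here is a minimal sketch of this top-box analysis, written in Python rather than the Stata actually used; the DataFrame and its column names (period, courtesy, discharge_md, and the covariates) are assumptions, and the code is a reconstruction of the described method, not the authors' code.

    # Sketch of the top-box analysis described above; file and column names are hypothetical.
    import pandas as pd
    from scipy.stats import chi2_contingency
    import statsmodels.formula.api as smf

    df = pd.read_csv("hcahps_responses.csv")  # hypothetical survey extract
    df = df.dropna()  # keep rows complete for both the test and the clustered model

    # Top-box indicator: 1 if the patient answered "always" to this HCAHPS item.
    df["courtesy_top"] = (df["courtesy"] == "always").astype(int)

    # Unadjusted comparison: chi-squared test of top-box rates, pre vs post.
    chi2, p, dof, expected = chi2_contingency(
        pd.crosstab(df["period"], df["courtesy_top"])
    )
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

    # Adjusted comparison: logistic regression on the top-box outcome with patient
    # covariates, clustering standard errors on the discharge physician.
    model = smf.logit(
        "courtesy_top ~ C(period) + age + C(sex) + C(race) + C(payer)"
        " + C(education) + C(health_status)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["discharge_md"]})
    print(model.summary())

The same pattern repeats per survey item; the composite score is then the proportion of top-box responses pooled across the items of each survey.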

RESULTS

Overall, 61 (97%) of 63 hospitalists completed the first session, 44 (70%) completed the second session, and 25 (40%) completed the third session of the program. Patient-satisfaction data were available for 278 patients during the preintervention period and 186 patients during the postintervention period. Patient demographic characteristics were similar for the 2 periods (Table 2).

Patient Characteristics
Characteristic | Preintervention (n=278) | Postintervention (n=186) | P Value

NOTE: Abbreviations: SD, standard deviation.

Mean age, y (SD) | 62.8 (17.0) | 61.6 (17.6) | 0.45
Female, no. (%) | 155 (55.8) | 114 (61.3) | 0.24
Nonwhite race, no. (%) | 87 (32.2) | 53 (29.1) | 0.48
Highest education completed, no. (%) | | | 0.45
  Did not complete high school | 12 (4.6) | 6 (3.3) |
  High school | 110 (41.7) | 81 (44.0) |
  4-year college | 50 (18.9) | 43 (23.4) |
  Advanced degree | 92 (34.9) | 54 (29.4) |
Payer, no. (%) | | | 0.83
  Medicare | 137 (49.3) | 89 (47.9) |
  Private | 113 (40.7) | 73 (39.3) |
  Medicaid | 13 (4.7) | 11 (5.9) |
  Self-pay/other | 15 (5.4) | 13 (7.0) |
Self-reported health status, no. (%) | | | 0.41
  Poor | 19 (7.1) | 18 (9.8) |
  Fair | 53 (19.7) | 43 (23.4) |
  Good | 89 (33.1) | 57 (31.0) |
  Very good | 89 (33.1) | 49 (26.6) |
  Excellent | 19 (7.1) | 17 (9.2) |

Patient Satisfaction With Hospitalist Communication

The HCAHPS and Press Ganey doctor-communication domain scores were not significantly different between the preintervention and postintervention periods (HCAHPS: 75.8 vs 79.2, P=0.42; Press Ganey: 61.4 vs 65.9, P=0.39). Two of the 3 HCAHPS items assessing doctor communication were rated higher during the postintervention period, but no result was statistically significant (Table 3). Similarly, all 5 of the Press Ganey items assessing doctor communication were rated higher during the postintervention period, but no result was statistically significant. The HCAHPS overall rating of hospital care was also not significantly different between the preintervention and postintervention periods. Results were similar in multivariate analyses, with no items showing statistically significant differences between the preintervention and postintervention periods.

Preintervention vs Postintervention Comparison of Top‐Box Patient‐Satisfaction Ratings
Question | Preintervention, No. (%) [n=270-277] | Postintervention, No. (%) [n=183-186] | Unadjusted P Value | Adjusted OR (95% CI) | Adjusted P Value

NOTE: Abbreviations: CI, confidence interval; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; OR, odds ratio. Data represent the number and percentage of respondents giving the highest rating (top box) for each question; denominators vary slightly due to missing data.

HCAHPS doctor-communication domain
How often did doctors treat you with courtesy and respect? | 224 (83) | 160 (86) | 0.31 | 1.23 (0.81-2.44) | 0.22
How often did doctors listen carefully to you? | 205 (75) | 145 (78) | 0.52 | 1.22 (0.74-2.04) | 0.42
How often did doctors explain things in a way you could understand? | 203 (75) | 137 (74) | 0.84 | 0.98 (0.59-1.64) | 0.94

Press Ganey physician-communication domain
Skill of physician | 189 (68) | 137 (74) | 0.19 | 1.38 (0.82-2.31) | 0.22
Physician's concern for your questions and worries | 157 (57) | 117 (64) | 0.14 | 1.30 (0.79-2.12) | 0.30
How well physician kept you informed | 158 (58) | 114 (62) | 0.36 | 1.15 (0.78-1.72) | 0.71
Time physician spent with you | 140 (51) | 101 (54) | 0.43 | 1.12 (0.66-1.89) | 0.67
Friendliness/courtesy of physician | 198 (71) | 136 (74) | 0.57 | 1.20 (0.74-1.94) | 0.46

HCAHPS global ratings
Overall rating of hospital | 189 (70) [n=270] | 137 (74) [n=186] | 0.40 | 1.33 (0.82-2.17) | 0.24

Pre‐post comparisons based on level of hospitalist participation in the training program are shown in Table 4. For patients discharged by no/low‐training hospitalists, 4 of the 8 total items assessing doctor communication were rated higher during the postintervention period, and 4 were rated lower, but no result was statistically significant. For patients discharged by highly trained hospitalists, all 8 items assessing doctor communication were rated higher during the postintervention period, but no result was statistically significant. Multivariate analyses were similar, with no items showing statistically significant differences between the preintervention and postintervention periods for either group.

Comparison of Top‐Box Patient‐Satisfaction Ratings by Discharge Hospitalist Participation
For each question, results are shown for patients discharged by no/low-training hospitalists (preintervention n=151-156, postintervention n=67-70) and by highly trained hospitalists (preintervention n=119-122, postintervention n=115-116), as preintervention No. (%) vs postintervention No. (%), unadjusted P value, and adjusted OR (95% CI) with P value.

NOTE: Abbreviations: CI, confidence interval; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; OR, odds ratio. Data represent the number and percentage of respondents giving the highest rating (top box) for each question; denominators vary slightly due to missing data.

HCAHPS doctor-communication domain
How often did doctors treat you with courtesy and respect?
  No/low training: 125 (83) vs 61 (88); P=0.28; adjusted OR 1.79 (0.82-3.89), P=0.14
  Highly trained: 99 (83) vs 99 (85); P=0.65; adjusted OR 1.33 (0.62-2.91), P=0.46
How often did doctors listen carefully to you?
  No/low training: 116 (77) vs 53 (76); P=0.86; adjusted OR 1.08 (0.49-2.38), P=0.19
  Highly trained: 89 (74) vs 92 (79); P=0.30; adjusted OR 1.43 (0.76-2.69), P=0.27
How often did doctors explain things in a way you could understand?
  No/low training: 115 (76) vs 47 (68); P=0.24; adjusted OR 0.59 (0.27-1.28), P=0.18
  Highly trained: 88 (74) vs 90 (78); P=0.52; adjusted OR 1.31 (0.68-2.50), P=0.42

Press Ganey physician-communication domain
Skill of physician
  No/low training: 110 (71) vs 52 (74); P=0.56; adjusted OR 1.32 (0.78-2.22), P=0.31
  Highly trained: 79 (65) vs 85 (73); P=0.16; adjusted OR 1.45 (0.65-3.27), P=0.37
Physician's concern for your questions and worries
  No/low training: 92 (60) vs 41 (61); P=0.88; adjusted OR 1.00 (0.59-1.77), P=0.99
  Highly trained: 65 (53) vs 76 (66); P=0.06; adjusted OR 1.71 (0.81-3.60), P=0.16
How well physician kept you informed
  No/low training: 89 (59) vs 42 (61); P=0.75; adjusted OR 1.16 (0.64-2.08), P=0.62
  Highly trained: 69 (57) vs 72 (63); P=0.34; adjusted OR 1.29 (0.75-2.20), P=0.35
Time physician spent with you
  No/low training: 83 (54) vs 37 (53); P=0.92; adjusted OR 0.87 (0.47-1.61), P=0.65
  Highly trained: 57 (47) vs 64 (55); P=0.19; adjusted OR 1.44 (0.64-3.21), P=0.38
Friendliness/courtesy of physician
  No/low training: 116 (75) vs 45 (66); P=0.18; adjusted OR 0.72 (0.37-1.38), P=0.32
  Highly trained: 82 (67) vs 91 (78); P=0.05; adjusted OR 1.89 (0.97-3.68), P=0.60

HCAHPS global ratings
Overall rating of hospital
  No/low training: 109 (73) vs 53 (75); P=0.63; adjusted OR 1.37 (0.67-2.81), P=0.39
  Highly trained: 86 (71) vs 90 (78); P=0.21; adjusted OR 1.60 (0.73-3.53), P=0.24

DISCUSSION

We found no significant improvement in patient satisfaction with doctor communication or overall rating of hospital care after implementation of a communication‐skills training program for hospitalists. There are several potential explanations for our results. First, though we used sound educational methods and attempted to replicate common clinical scenarios during simulation exercises, our program may not have resulted in improved communication behaviors during actual clinical care. We attempted to balance instructional methods that would result in behavioral change with a feasible investment of time and effort on the part of our learners (ie, practicing hospitalists). It is possible that additional time, feedback, and practice of communication skills would be necessary to change behaviors in the clinical setting. However, prior communication‐skills interventions have similarly struggled to show an impact on patient satisfaction.[13, 14] Second, we had incomplete participation in the program, with only 40% of hospitalists completing all 3 planned sessions. We encouraged all hospitalists, regardless of job type, to participate in the program. Participation rates were lower for 1‐year hospitalists compared with career hospitalists. The results of our analyses based on level of hospitalist participation in the training program, although not achieving statistical significance, suggest a greater effect of the program with higher degrees of participation.

Most important, the study was likely underpowered to detect a statistically significant difference in satisfaction results. Leaders were committed to providing communication‐skills training throughout our organization. We did not know the magnitude of potential improvement in satisfaction scores that might arise from our efforts, and therefore we did not conduct power calculations before designing and implementing the training program. Our HCAHPS composite doctor‐communication domain performance was 76% during the preintervention period and 79% during the postintervention period. Assuming an absolute 3% improvement is indeed possible, we would have needed >3000 patients in each period to have 80% power to detect a significant difference. Similarly, we would have needed >2000 patients during each period to have 80% power to detect an absolute 4% improvement in global rating of hospital care.
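
A back-of-the-envelope check of these figures is possible with the standard normal-approximation sample-size formula for comparing two independent proportions (two-sided alpha = 0.05, power = 0.80). This is our reconstruction and only approximates whatever method underlies the quoted numbers:

    # Sample size per group to detect a difference between two proportions,
    # via the usual normal-approximation formula.
    from scipy.stats import norm

    def n_per_group(p1, p2, alpha=0.05, power=0.80):
        z_a = norm.ppf(1 - alpha / 2)   # 1.96 for two-sided alpha = 0.05
        z_b = norm.ppf(power)           # 0.84 for 80% power
        p_bar = (p1 + p2) / 2
        num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
               + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
        return (num / (p2 - p1)) ** 2

    print(n_per_group(0.76, 0.79))  # ~3,040: doctor communication, 76% -> 79%
    print(n_per_group(0.70, 0.74))  # ~1,980: overall hospital rating, 70% -> 74%

The first result matches the >3000 patients per period quoted above; the second lands just under the >2000 quoted, consistent with rounding or a slightly different formula.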

In an attempt to discern whether our favorable results were due to secular trends, we conducted post hoc analyses of HCAHPS nurse‐communication and hospital‐environment domains for the preintervention vs postintervention periods. Two of the 3 nurse‐communication items were rated lower during the postintervention period, but no result was statistically significant. Both hospital‐environment domain items were rated lower during the postintervention period, and 1 result was statistically significant (quiet at night). This post hoc evaluation lends additional support to the potential benefit of the communication‐skills training program.

The findings from this study represent an important issue for leaders attempting to improve quality performance within their organizations. What level of proof is needed before investing time and effort in implementing an intervention? With mounting pressure to improve performance, leaders are often left to make informed decisions based on data that fall short of scientifically rigorous evidence. Importantly, an increase in composite doctor-communication ratings from 76% to 79% would translate into an improvement from 25th-percentile to 50th-percentile performance in the fiscal-year 2011 Press Ganey University Healthcare Consortium benchmark comparison (based on surveys received from September 1, 2010, to August 31, 2011).[15]

Our study has several limitations. First, we assessed an intervention on a single service in a single hospital. Generalizability may be limited, as hospital medicine groups, hospitals, and the patients they serve vary. Second, our intervention was based on a framework (ie, AIDET) that has face validity but has not undergone extensive study to confirm that the underlying constructs, and the behaviors related to them, are tightly linked to patient satisfaction. Third, as previously mentioned, we were likely underpowered to detect a significant improvement in satisfaction resulting from our intervention. Incomplete participation in the training program may have also limited the effect of our intervention. Finally, our comparisons by hospitalist level of participation were based on the discharging physician. Attribution of a patient response to a single physician is problematic because many patients encounter more than 1 hospitalist and 1 or more specialist physicians during their stay.

CONCLUSION

In summary, we found nonsignificant improvements in patient satisfaction with doctor communication after implementation of a communication-skills training program for hospitalists. Larger studies are needed to assess whether a communication-skills training program can truly improve patient satisfaction with doctor communication and overall hospital care.

Acknowledgments

The authors express their gratitude to the hospitalists involved in this program, especially Eric Schaefer, Nita Kulkarni, Stevie Mazyck, Rachel Cyrus, and Hiren Shah. The authors also thank Nicholas Christensen for assistance in data acquisition.

Disclosures: Nothing to report.

References
  1. Becker G, Kempf DE, Xander CJ, Momm F, Olschewski M, Blum HE. Four minutes for a patient, twenty seconds for a relative—an observational study at a university hospital. BMC Health Serv Res. 2010;10:94.
  2. O'Leary KJ, Liebovitz DM, Baker DW. How hospitalists spend their time: insights on efficiency and safety. J Hosp Med. 2006;1(2):88-93.
  3. CAHPS: Surveys and Tools to Advance Patient‐Centered Care. Available at: http://cahps.ahrq.gov. Accessed July 12, 2012.
  4. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev. 2010;67(1):27-37.
  5. Goldstein E, Farquhar M, Crofton C, Darby C, Garfinkel S. Measuring hospital care from the patients' perspective: an overview of the CAHPS Hospital Survey development process. Health Serv Res. 2005;40(6 pt 2):1977-1995.
  6. US Department of Health and Human Services. Hospital Compare. Available at: http://hospitalcompare.hhs.gov/. Accessed November 5, 2012.
  7. Center for Medicare and Medicaid Services. Hospital Value Based Purchasing Program. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/hospital‐value‐based‐purchasing/index.html?redirect=/hospital‐value‐based‐purchasing. Accessed August 1, 2012.
  8. Rao JK, Anderson LA, Inui TS, Frankel RM. Communication interventions make a difference in conversations between physicians and patients: a systematic review of the evidence. Med Care. 2007;45(4):340-349.
  9. O'Leary KJ, Wayne DB, Landler MP, et al. Impact of localizing physicians to hospital units on nurse-physician communication and agreement on the plan of care. J Gen Intern Med. 2009;24(11):1223-1227.
  10. Studer Group. Acknowledge, Introduce, Duration, Explanation and Thank You. Available at: http://www.studergroup.com/aidet. Accessed November 5, 2012.
  11. Kern DE, Thomas PA, Bass EB, Howard DM, eds. Curriculum Development for Medical Education: A Six‐Step Approach. Baltimore, MD: Johns Hopkins University Press; 1998.
  12. Vanderbilt University Medical Center and Studer Group. Building Patient Trust with AIDET®: Clinical Excellence with Patient Compliance Through Effective Communication. Gulf Breeze, FL: Fire Starter Publishing; 2008.
  13. Brown JB, Boles M, Mullooly JP, Levinson W. Effect of clinician communication skills training on patient satisfaction: a randomized, controlled trial. Ann Intern Med. 1999;131(11):822-829.
  14. Fossli Jensen B, Gulbrandsen P, Dahl FA, Krupat E, Frankel RM, Finset A. Effectiveness of a short course in clinical communication skills for hospital doctors: results of a crossover randomized controlled trial (ISRCTN22153332). Patient Educ Couns. 2010;84(2):163-169.
  15. Press Ganey HCAHPS Top Box and Rank Report, Fiscal Year 2011. Inpatient, University Healthcare Consortium Peer Group. South Bend, IN: Press Ganey Associates; 2011.
Article PDF
Issue
Journal of Hospital Medicine - 8(6)
Page Number
315-320
Sections
Files
Files
Article PDF
Article PDF

Hospital settings present unique challenges to patient‐clinician communication and collaboration. Patients frequently have multiple, active conditions. Interprofessional teams are large and care for multiple patients at the same time, and team membership is dynamic and dispersed. Moreover, physicians spend relatively little time with patients[1, 2] and seldom receive training in communication skills after medical school.

The Agency for Healthcare Research and Quality (AHRQ) has developed the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey to assess hospitalized patients' experiences with care.[3, 4, 5] Results are publicly reported on the US Department of Health and Human Services Hospital Compare Web site[6] and now affect hospital payment through the Center for Medicare and Medicaid Services Hospital Value‐Based Purchasing Program.[7]

Despite this increased transparency and accountability for performance related to the patient experience, little research has been conducted on how hospitals or clinicians might improve performance. Although interventions to enhance physician communication skills have shown improvements in observed behaviors, few studies have assessed benefit from the patient's perspective and few interventions have been integrated into practice.[8] We sought to assess the impact of a communication‐skills training program, based on a common framework used by hospitals, on patient satisfaction with doctor communication and overall hospital care.

METHODS

Setting and Study Design

The study was conducted at Northwestern Memorial Hospital (NMH), an 897‐bed tertiary‐care teaching hospital in Chicago, IL, and was approved by the institutional review board of Northwestern University. This study was a preintervention vs postintervention comparison of patient‐satisfaction scores. The intervention was a communication‐skills training program for all NMH hospitalists. We compared patient‐satisfaction survey data for patients admitted to the nonteaching hospitalist service during the 26 weeks prior to the intervention with data for patients admitted to the same service during the 22 weeks afterward. Hospitalists on this service worked 7 consecutive days, usually followed by 7 days free from clinical duty. Hospitalists cared for approximately 1014 patients per day without the assistance of resident physicians or midlevel providers (ie, physician assistants or nurse practitioners). Nighttime patient care was provided by in‐house hospitalists (ie, nocturnists). A majority of nighttime shifts were staffed by physicians who worked for the group for a single year. As a result of a prior intervention, hospitalists' patients were localized to specific units, each overseen by a hospitalist‐unit medical director.[9] We excluded all patients initially admitted to other services (eg, intensive care unit, surgical services) and patients discharged from other services.

Hospitalist Communication Skills Training Program

Northwestern Memorial Hospital implemented a communication‐skills training program in 2009 intended to enhance patient experience and improve patient‐satisfaction scores. All nonphysician staff were required to attend a 4‐hour training session based on the AIDET (Acknowledge, Introduce, Duration, Explanation, and Thank You) principles developed by the Studer Group.[10] The Studer Group is a well‐known healthcare consulting firm that aims to assist healthcare organizations to improve clinical, operational, and financial outcomes. The acronym AIDET provides a framework for communication‐skills behaviors (Table 1).

AIDET Elements, Explanations, and Examples
AIDET ElementExplanationExamples
  • NOTE: Abbreviations: AIDET, Acknowledge, Introduce, Duration, Explanation, and Thank You; ICU, intensive care unit; IR, interventional radiology; PRN, as needed.

AcknowledgeUse appropriate greeting, smile, and make eye contact.Knock on patient's door. Hello, may I come in now?
Respect privacy: Knock and ask for permission before entering. Use curtains/doors appropriately.Good morning. Is it a good time to talk?
Position yourself on the same level as the patient.Who do you have here with you today?
Do not ignore others in the room (visitors or colleagues). 
IntroduceIntroduce yourself by name and role.My name is Dr. Smith and I am your hospitalist physician. I'll be taking care of you while you are in the hospital.
Introduce any accompanying members of your team.When on teaching service: I'm the supervising physician or I'm the physician in charge of your care.
Address patients by title and last name (eg, Mrs. Smith) unless given permission to use first name.
Explain why you are there.
Do not assume patients remember your name or role.
DurationProvide specific information on when you will be available, or when you will be back.I'll be back between 2 and 3 pm, so if you think of any additional questions I can answer them then.
For tests/procedures: Explain how long it will take. Provide a time range for when it will happen.In my experience, the test I am ordering for you will be done within the next 12 to 24 hours.
Provide updates to the patient if the expected wait time has changed.I should have the results for this test when I see you tomorrow morning.
Do not blame another department or staff for delays. 
ExplanationExplain your rationale for decisions.I have ordered this test because
Use terms the patient can understand.The possible side effects of this medication include
Explain next steps/summarize plan for the day.What questions do you have?
Confirm understanding using teach back.What are you most concerned about?
Assume patients have questions and/or concerns.I want to make sure you understood everything. Can you tell me in your own words what you will need to do once you are at home?
Do not use acronyms that patients may not understand (eg, PRN, IR, ICU).
Thank youThank the patient and/or family.I really appreciate you telling me about your symptoms. I know you told several people before.
Ask if there is anything else you can do for the patient.Thank you for giving me the opportunity to care for you. What else can I do for you today?
Explain when you will be back and how the patient can reach you if needed.I'll see you again tomorrow morning. If you need me before then, just ask the nurse to page me.
Do not appear rushed or distracted when ending your interaction.

We adapted the AIDET framework and designed a communication‐skills training program, specifically for physicians, to emphasize reflection on current communication behaviors, deliberate practice of enhanced communication skills, and feedback based on performance during simulated and real clinical encounters. These educational methods are consistent with recommended strategies to improve behavioral performance.[11] During the first session, we discussed measurement of patient satisfaction, introduced AIDET principles, gave examples of specific behaviors for each principle, and had participants view 2 short videos displaying a range of communication skills followed by facilitated debriefing.[12] The second session included 3 simulation‐based exercises. Participants rotated roles in the scenarios (eg, patient, family member, physician) and facilitated debriefing was co‐led by a hospitalist leader (K.J.O.) and a patient‐experience administrative leader (either T.D. or J.R.). The third session involved direct observation of participants' clinical encounters and immediate feedback. This coaching session was performed for an initial group of 5 hospitalist‐unit medical directors by the manager of patient experience (T.D.) and subsequently by these medical directors for the remaining participants in the program. Each of the 3 sessions lasted 90 minutes. Instructional materials are available from the authors upon request.

The communication‐skills training program began in August 2011 and extended through January 2012. Participation was strongly encouraged but not mandatory. Sessions were offered multiple times to accommodate clinical schedules. One of the co‐investigators took attendance at each session to assess participation rates.

Survey Instruments and Data

During the study period, NMH used a third‐party vendor, Press Ganey Associates, Inc., to administer the HCAHPS survey to a random sample of 40% of hospitalized patients between 48 hours and 6 weeks after discharge. The HCAHPS survey has 27 total questions, including 3 questions assessing doctor communication as a domain.[3] In addition to the HCAHPS questions, the survey administered to NMH patients included questions developed by Press Ganey. Questions in the surveys used ordinal response scales. Specifically, response options for HCAHPS doctor‐communication questions were never, sometimes, usually, and always. Response options for Press Ganey doctor‐communication questions were very poor, poor, fair, good, and very good. Patients provided an overall hospital rating in the HCAHPS survey using a 010 scale, with 0=worst hospital possible and 10=best hospital possible.

We defined the preintervention period as the 26 weeks prior to implementation of the communication‐skills program (patients admitted on or between January 31, 2011, and July 31, 2011) and the postintervention period as the 22 weeks after implementation (patients admitted on or between January 31, 2012, and June 30, 2012). The postintervention period was 1 month shorter than the preintervention period in an effort to avoid confounding due to a number of new hospitalists starting in July 2012. We defined a discharge attending as highly trained if he/she attended all 3 sessions of the communication‐skills training program. The discharge attending was designated as no/low training if he/she attended fewer than the full 3 sessions.

Data Analysis

Data were obtained from the Northwestern Medicine Enterprise Data Warehouse, a single, integrated database of all clinical and research data from all patients receiving treatment through Northwestern University healthcare affiliates. We used 2 and Student t tests to compare patient demographic characteristics preintervention vs postintervention. We used 2 tests to compare the percentage of patients giving top‐box ratings to each doctor‐communication question (ie, always for HCAHPS and very good for Press Ganey) and giving an overall hospital rating of 9 or 10. We used top‐box comparisons, rather than comparison of mean or median scores, because patient‐satisfaction data are typically highly skewed toward favorable responses. This approach is consistent with prior HCAHPS research.[4, 5] We calculated composite doctor‐communication scores as the proportion of top‐box responses across items in each survey (ie, HCAHPS and Press Ganey). We first compared all patients during the preintervention and postintervention period. We then identified patients for whom the discharge attending worked as a hospitalist at NMH during both the preintervention and postintervention periods and compared satisfaction for patients discharged by hospitalists who had no/low training and for patients discharged by hospitalists who were highly trained. We performed multivariate logistic regression, using intervention period as the predictor variable and top‐box rating as the outcome variable for each doctor‐communication question and for overall hospital rating of 9 or 10. Covariates included patient age, sex, race, payer, self‐reported education level, and self‐reported health status. Models accounted for clustering of patients within discharge physicians. Similarly, we conducted multivariate logistic regression, using discharge attending category as the predictor variable (no/low training vs highly trained). The various comparisons described were intended to mimic intention to treat and treatment received analyses in light of incomplete participation in the communication‐skills program. All analyses were conducted using Stata version 11.2 (StataCorp, College Station, TX).

RESULTS

Overall, 61 (97%) of 63 hospitalists completed the first session, 44 (70%) completed the second session, and 25 (40%) completed the third session of program. Patient‐satisfaction data were available for 278 patients during the preintervention period and 186 patients during the postintervention period. Patient demographic characteristics were similar for the 2 periods (Table 2).

Patient Characteristics
CharacteristicPreintervention (n=278)Postintervention (n=186)P Value
  • NOTE: Abbreviations: SD, standard deviation.

Mean age, y (SD)62.8 (17.0)61.6 (17.6)0.45
Female, no. (%)155 (55.8)114 (61.3)0.24
Nonwhite race, no. (%)87 (32.2)53 (29.1)0.48
Highest education completed, no. (%)   
Did not complete high school12 (4.6)6 (3.3)0.45
High school110 (41.7)81 (44.0) 
4‐year college50 (18.9)43 (23.4) 
Advanced degree92 (34.9)54 (29.4) 
Payer, no. (%)   
Medicare137 (49.3)89 (47.9)0.83
Private113 (40.7)73 (39.3) 
Medicaid13 (4.7)11 (5.9) 
Self‐pay/other15 (5.4)13 (7.0) 
Self‐reported health status, no. (%)   
Poor19 (7.1)18 (9.8)0.41
Fair53 (19.7)43 (23.4) 
Good89 (33.1)57 (31.0) 
Very good89 (33.1)49 (26.6) 
Excellent19 (7.1)17 (9.2) 

Patient Satisfaction With Hospitalist Communication

The HCAHPS and Press Ganey doctor communication domain scores were not significantly different between the preintervention and postintervention periods (75.8 vs 79.2, P=0.42 and 61.4 vs 65.9, P=0.39). Two of the 3 HCAHPS items assessing doctor communication were rated higher during the postintervention period, but no result was statistically significant (Table 3). Similarly, all 5 of the Press Ganey items assessing doctor communication were rated higher during the postintervention period, but no result was statistically significant. The HCAHPS overall rating of hospital care was also not significantly different between the preintervention and postintervention period. Results were similar in multivariate analyses, with no items showing statistically significant differences between the preintervention and postintervention periods.

Preintervention vs Postintervention Comparison of Top‐Box Patient‐Satisfaction Ratings
 Unadjusted AnalysisaAdjusted Analysis
 Preintervention, No. (%) [n=270277]Postintervention, No. (%) [n=183186]P ValueOR (95% CI)P Value
  • NOTE: Abbreviations: CI, confidence interval; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; OR, odds ratio.

  • Data represent the number and percentage of respondents giving highest rating (top box) for each question; denominators vary slightly due to missing data.

HCAHPS doctor‐communication domain     
How often did doctors treat you with courtesy and respect?224 (83)160 (86)0.311.23 (0.81‐2.44)0.22
How often did doctors listen carefully to you?205 (75)145 (78)0.521.22 (0.74‐2.04)0.42
How often did doctors explain things in a way you could understand?203 (75)137 (74)0.840.98 (0.59‐1.64)0.94
Press Ganey physician‐communication domain     
Skill of physician189 (68)137 (74)0.191.38 (0.82‐2.31)0.22
Physician's concern for your questions and worries157 (57)117 (64)0.141.30 (0.79‐2.12)0.30
How well physician kept you informed158 (58)114 (62)0.361.15 (0.78‐1.72)0.71
Time physician spent with you140 (51)101 (54)0.431.12 (0.66‐1.89)0.67
Friendliness/courtesy of physician198 (71)136 (74)0.571.20 (0.74‐1.94)0.46
HCAHPS global ratings     
Overall rating of hospital189 (70) [n=270]137 (74) [n=186]0.401.33 (0.82‐2.17)0.24

Pre‐post comparisons based on level of hospitalist participation in the training program are shown in Table 4. For patients discharged by no/low‐training hospitalists, 4 of the 8 total items assessing doctor communication were rated higher during the postintervention period, and 4 were rated lower, but no result was statistically significant. For patients discharged by highly trained hospitalists, all 8 items assessing doctor communication were rated higher during the postintervention period, but no result was statistically significant. Multivariate analyses were similar, with no items showing statistically significant differences between the preintervention and postintervention periods for either group.

Comparison of Top‐Box Patient‐Satisfaction Ratings by Discharge Hospitalist Participation
 No/Low TrainingHighly Trained
 Unadjusted AnalysisaAdjusted AnalysisUnadjusted AnalysisaAdjusted Analysis
 Preintervention, No. (%) [n=151156]Postintervention, No. (%) [n=6770]P ValueOR (95% CI)P ValuePreintervention, No. (%) [n=119122]Postintervention, No. (%) [n=115116]P ValueOR (95% CI)P Value
  • NOTE: Abbreviations: CI, confidence interval; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; OR, odds ratio.

  • Data represent the number and percentage of respondents giving highest rating (top box) for each question; denominators vary slightly due to missing data.

HCAHPS doctor‐ communication domain          
How often did doctors treat you with courtesy and respect?125 (83)61 (88)0.281.79 (0.82‐3.89)0.1499 (83)99 (85)0.651.33 (0.62‐2.91)0.46
How often did doctors listen carefully to you?116 (77)53 (76)0.861.08 (0.49‐2.38)0.1989 (74)92 (79)0.301.43 (0.76‐2.69)0.27
How often did doctors explain things in a way you could understand?115 (76)47 (68)0.240.59 (0.27‐1.28)0.1888 (74)90 (78)0.521.31 (0.68‐2.50)0.42
Press Ganey physician‐communication domain          
Skill of physician110 (71)52 (74)0.561.32 (0.78‐2.22)0.3179 (65)85 (73)0.161.45 (0.65‐3.27)0.37
Physician's concern for your questions and worries92 (60)41 (61)0.881.00 (0.59‐1.77)0.9965 (53)76 (66)0.061.71 (0.81‐3.60)0.16
How well physician kept you informed89 (59)42 (61)0.751.16 (0.64‐2.08)0.6269 (57)72 (63)0.341.29 (0.75‐2.20)0.35
Time physician spent with you83 (54)37 (53)0.920.87 (0.47‐1.61)0.6557 (47)64 (55)0.191.44 (0.64‐3.21)0.38
Friendliness/courtesy of physician116 (75)45 (66)0.180.72 (0.37‐1.38)0.3282 (67)91 (78)0.051.89 (0.97‐3.68)0.60
HCAHPS global ratings          
Overall rating of hospital109 (73)53 (75)0.631.37 (0.67‐2.81)0.3986 (71)90 (78)0.211.60 (0.73‐3.53)0.24

DISCUSSION

We found no significant improvement in patient satisfaction with doctor communication or overall rating of hospital care after implementation of a communication‐skills training program for hospitalists. There are several potential explanations for our results. First, though we used sound educational methods and attempted to replicate common clinical scenarios during simulation exercises, our program may not have resulted in improved communication behaviors during actual clinical care. We attempted to balance instructional methods that would result in behavioral change with a feasible investment of time and effort on the part of our learners (ie, practicing hospitalists). It is possible that additional time, feedback, and practice of communication skills would be necessary to change behaviors in the clinical setting. However, prior communication‐skills interventions have similarly struggled to show an impact on patient satisfaction.[13, 14] Second, we had incomplete participation in the program, with only 40% of hospitalists completing all 3 planned sessions. We encouraged all hospitalists, regardless of job type, to participate in the program. Participation rates were lower for 1‐year hospitalists compared with career hospitalists. The results of our analyses based on level of hospitalist participation in the training program, although not achieving statistical significance, suggest a greater effect of the program with higher degrees of participation.

Most important, the study was likely underpowered to detect a statistically significant difference in satisfaction results. Leaders were committed to providing communication‐skills training throughout our organization. We did not know the magnitude of potential improvement in satisfaction scores that might arise from our efforts, and therefore we did not conduct power calculations before designing and implementing the training program. Our HCAHPS composite doctor‐communication domain performance was 76% during the preintervention period and 79% during the postintervention period. Assuming an absolute 3% improvement is indeed possible, we would have needed >3000 patients in each period to have 80% power to detect a significant difference. Similarly, we would have needed >2000 patients during each period to have 80% power to detect an absolute 4% improvement in global rating of hospital care.

In an attempt to discern whether our favorable results were due to secular trends, we conducted post hoc analyses of HCAHPS nurse‐communication and hospital‐environment domains for the preintervention vs postintervention periods. Two of the 3 nurse‐communication items were rated lower during the postintervention period, but no result was statistically significant. Both hospital‐environment domain items were rated lower during the postintervention period, and 1 result was statistically significant (quiet at night). This post hoc evaluation lends additional support to the potential benefit of the communication‐skills training program.

The findings from this study represent an important issue for leaders attempting to improve quality performance within their organizations. What level of proof is needed before investing time and effort in implementing an intervention? With mounting pressure to improve performance, leaders are often left to make informed decisions based on data that fall short of scientifically rigorous evidence. Importantly, an increase in composite doctor‐communication ratings from 76% to 79% would translate into an improvement from the 25th percentile to 50th‐percentile performance in the fiscal‐year 2011 Press Ganey University Healthcare Consortium benchmark comparison (based on surveys received from September 1, 2010, to August 31, 2011).[15]

Our study has several limitations. First, we assessed an intervention on a single service in a single hospital. Generalizability may be limited, as hospital medicine groups, hospitals, and the patients they serve vary. Second, our intervention was based on a framework (ie, AIDET) that has face validity but has not undergone extensive study to confirm that the underlying constructs, and the behaviors related to them, are tightly linked to patient satisfaction. Third, as previously mentioned, we were likely underpowered to detect a significant improvement in satisfaction resulting from our intervention. Incomplete participation in the training program may have also limited the effect of our intervention. Finally, our comparisons by hospitalist level of participation were based on the discharging physician. Attribution of a patient response to a single physician is problematic because many patients encounter more than 1 hospitalist and 1 or more specialist physicians during their stay.

CONCLUSION

In summary, we found improvements in patient satisfaction with doctor communication, which were not statistically significant, after implementation of a communication‐skills training program for hospitalists. Larger studies are needed to assess whether a communication‐skills training program can truly improve patient satisfaction with doctor communication and overall hospital care.

Acknowledgments

The authors express their gratitude to the hospitalists involved in this program, especially Eric Schaefer, Nita Kulkarni, Stevie Mazyck, Rachel Cyrus, and Hiren Shah. The authors also thank Nicholas Christensen for assistance in data acquisition.

Disclosures: Nothing to report.

Hospital settings present unique challenges to patient‐clinician communication and collaboration. Patients frequently have multiple, active conditions. Interprofessional teams are large and care for multiple patients at the same time, and team membership is dynamic and dispersed. Moreover, physicians spend relatively little time with patients[1, 2] and seldom receive training in communication skills after medical school.

The Agency for Healthcare Research and Quality (AHRQ) has developed the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey to assess hospitalized patients' experiences with care.[3, 4, 5] Results are publicly reported on the US Department of Health and Human Services Hospital Compare Web site[6] and now affect hospital payment through the Center for Medicare and Medicaid Services Hospital Value‐Based Purchasing Program.[7]

Despite this increased transparency and accountability for performance related to the patient experience, little research has been conducted on how hospitals or clinicians might improve performance. Although interventions to enhance physician communication skills have shown improvements in observed behaviors, few studies have assessed benefit from the patient's perspective and few interventions have been integrated into practice.[8] We sought to assess the impact of a communication‐skills training program, based on a common framework used by hospitals, on patient satisfaction with doctor communication and overall hospital care.

METHODS

Setting and Study Design

The study was conducted at Northwestern Memorial Hospital (NMH), an 897‐bed tertiary‐care teaching hospital in Chicago, IL, and was approved by the institutional review board of Northwestern University. This study was a preintervention vs postintervention comparison of patient‐satisfaction scores. The intervention was a communication‐skills training program for all NMH hospitalists. We compared patient‐satisfaction survey data for patients admitted to the nonteaching hospitalist service during the 26 weeks prior to the intervention with data for patients admitted to the same service during the 22 weeks afterward. Hospitalists on this service worked 7 consecutive days, usually followed by 7 days free from clinical duty. Hospitalists cared for approximately 1014 patients per day without the assistance of resident physicians or midlevel providers (ie, physician assistants or nurse practitioners). Nighttime patient care was provided by in‐house hospitalists (ie, nocturnists). A majority of nighttime shifts were staffed by physicians who worked for the group for a single year. As a result of a prior intervention, hospitalists' patients were localized to specific units, each overseen by a hospitalist‐unit medical director.[9] We excluded all patients initially admitted to other services (eg, intensive care unit, surgical services) and patients discharged from other services.

Hospitalist Communication Skills Training Program

Northwestern Memorial Hospital implemented a communication‐skills training program in 2009 intended to enhance patient experience and improve patient‐satisfaction scores. All nonphysician staff were required to attend a 4‐hour training session based on the AIDET (Acknowledge, Introduce, Duration, Explanation, and Thank You) principles developed by the Studer Group.[10] The Studer Group is a well‐known healthcare consulting firm that aims to assist healthcare organizations to improve clinical, operational, and financial outcomes. The acronym AIDET provides a framework for communication‐skills behaviors (Table 1).

AIDET Elements, Explanations, and Examples
AIDET ElementExplanationExamples
  • NOTE: Abbreviations: AIDET, Acknowledge, Introduce, Duration, Explanation, and Thank You; ICU, intensive care unit; IR, interventional radiology; PRN, as needed.

AcknowledgeUse appropriate greeting, smile, and make eye contact.Knock on patient's door. Hello, may I come in now?
Respect privacy: Knock and ask for permission before entering. Use curtains/doors appropriately.Good morning. Is it a good time to talk?
Position yourself on the same level as the patient.Who do you have here with you today?
Do not ignore others in the room (visitors or colleagues). 
IntroduceIntroduce yourself by name and role.My name is Dr. Smith and I am your hospitalist physician. I'll be taking care of you while you are in the hospital.
Introduce any accompanying members of your team.When on teaching service: I'm the supervising physician or I'm the physician in charge of your care.
Address patients by title and last name (eg, Mrs. Smith) unless given permission to use first name.
Explain why you are there.
Do not assume patients remember your name or role.
DurationProvide specific information on when you will be available, or when you will be back.I'll be back between 2 and 3 pm, so if you think of any additional questions I can answer them then.
For tests/procedures: Explain how long it will take. Provide a time range for when it will happen.In my experience, the test I am ordering for you will be done within the next 12 to 24 hours.
Provide updates to the patient if the expected wait time has changed.I should have the results for this test when I see you tomorrow morning.
Do not blame another department or staff for delays. 
ExplanationExplain your rationale for decisions.I have ordered this test because
Use terms the patient can understand.The possible side effects of this medication include
Explain next steps/summarize plan for the day.What questions do you have?
Confirm understanding using teach back.What are you most concerned about?
Assume patients have questions and/or concerns.I want to make sure you understood everything. Can you tell me in your own words what you will need to do once you are at home?
Do not use acronyms that patients may not understand (eg, PRN, IR, ICU).
Thank youThank the patient and/or family.I really appreciate you telling me about your symptoms. I know you told several people before.
Ask if there is anything else you can do for the patient.Thank you for giving me the opportunity to care for you. What else can I do for you today?
Explain when you will be back and how the patient can reach you if needed.I'll see you again tomorrow morning. If you need me before then, just ask the nurse to page me.
Do not appear rushed or distracted when ending your interaction.

We adapted the AIDET framework and designed a communication‐skills training program, specifically for physicians, to emphasize reflection on current communication behaviors, deliberate practice of enhanced communication skills, and feedback based on performance during simulated and real clinical encounters. These educational methods are consistent with recommended strategies to improve behavioral performance.[11] During the first session, we discussed measurement of patient satisfaction, introduced AIDET principles, gave examples of specific behaviors for each principle, and had participants view 2 short videos displaying a range of communication skills followed by facilitated debriefing.[12] The second session included 3 simulation‐based exercises. Participants rotated roles in the scenarios (eg, patient, family member, physician) and facilitated debriefing was co‐led by a hospitalist leader (K.J.O.) and a patient‐experience administrative leader (either T.D. or J.R.). The third session involved direct observation of participants' clinical encounters and immediate feedback. This coaching session was performed for an initial group of 5 hospitalist‐unit medical directors by the manager of patient experience (T.D.) and subsequently by these medical directors for the remaining participants in the program. Each of the 3 sessions lasted 90 minutes. Instructional materials are available from the authors upon request.

The communication‐skills training program began in August 2011 and extended through January 2012. Participation was strongly encouraged but not mandatory. Sessions were offered multiple times to accommodate clinical schedules. One of the co‐investigators took attendance at each session to assess participation rates.

Survey Instruments and Data

During the study period, NMH used a third‐party vendor, Press Ganey Associates, Inc., to administer the HCAHPS survey to a random sample of 40% of hospitalized patients between 48 hours and 6 weeks after discharge. The HCAHPS survey has 27 total questions, including 3 questions assessing doctor communication as a domain.[3] In addition to the HCAHPS questions, the survey administered to NMH patients included questions developed by Press Ganey. Questions in the surveys used ordinal response scales. Specifically, response options for HCAHPS doctor‐communication questions were never, sometimes, usually, and always. Response options for Press Ganey doctor‐communication questions were very poor, poor, fair, good, and very good. Patients provided an overall hospital rating in the HCAHPS survey using a 010 scale, with 0=worst hospital possible and 10=best hospital possible.

We defined the preintervention period as the 26 weeks prior to implementation of the communication‐skills program (patients admitted on or between January 31, 2011, and July 31, 2011) and the postintervention period as the 22 weeks after implementation (patients admitted on or between January 31, 2012, and June 30, 2012). The postintervention period was 1 month shorter than the preintervention period in an effort to avoid confounding due to a number of new hospitalists starting in July 2012. We defined a discharge attending as highly trained if he/she attended all 3 sessions of the communication‐skills training program. The discharge attending was designated as no/low training if he/she attended fewer than the full 3 sessions.

Data Analysis

Data were obtained from the Northwestern Medicine Enterprise Data Warehouse, a single, integrated database of all clinical and research data from all patients receiving treatment through Northwestern University healthcare affiliates. We used χ2 and Student t tests to compare patient demographic characteristics preintervention vs postintervention. We used χ2 tests to compare the percentage of patients giving top‐box ratings to each doctor‐communication question (ie, always for HCAHPS and very good for Press Ganey) and giving an overall hospital rating of 9 or 10. We used top‐box comparisons, rather than comparison of mean or median scores, because patient‐satisfaction data are typically highly skewed toward favorable responses. This approach is consistent with prior HCAHPS research.[4, 5] We calculated composite doctor‐communication scores as the proportion of top‐box responses across items in each survey (ie, HCAHPS and Press Ganey). We first compared all patients during the preintervention and postintervention periods. We then identified patients for whom the discharge attending worked as a hospitalist at NMH during both the preintervention and postintervention periods and compared satisfaction for patients discharged by hospitalists who had no/low training with that for patients discharged by hospitalists who were highly trained. We performed multivariate logistic regression, using intervention period as the predictor variable and top‐box rating as the outcome variable for each doctor‐communication question and for overall hospital rating of 9 or 10. Covariates included patient age, sex, race, payer, self‐reported education level, and self‐reported health status. Models accounted for clustering of patients within discharge physicians. Similarly, we conducted multivariate logistic regression using discharge attending category as the predictor variable (no/low training vs highly trained). The various comparisons described were intended to mimic intention‐to‐treat and treatment‐received analyses in light of incomplete participation in the communication‐skills program. All analyses were conducted using Stata version 11.2 (StataCorp, College Station, TX).
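
To make the top‐box approach concrete, the following is a minimal sketch in Python (the authors used Stata, so this is illustrative only) of dichotomizing one item and comparing the 2 periods. The counts come from the courtesy/respect item in Table 3; the denominators are approximate because of missing data, so the P value only roughly matches the table.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Top-box coding described above: "always" (HCAHPS) or "very good"
    # (Press Ganey) is scored 1; every other response is scored 0.
    def top_box_table(pre_top, pre_n, post_top, post_n):
        """2x2 table of top-box vs non-top-box counts for the 2 periods."""
        return np.array([[pre_top, pre_n - pre_top],
                         [post_top, post_n - post_top]])

    # Courtesy/respect item from Table 3 (denominators approximate:
    # 270-277 preintervention, 183-186 postintervention).
    table = top_box_table(224, 270, 160, 186)
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"pre {224/270:.0%} vs post {160/186:.0%} top box, P = {p:.2f}")

The article's adjusted analyses additionally fit logistic regression models with patient covariates and clustering by discharge physician; any implementation with cluster‐robust standard errors (eg, generalized estimating equations) would follow the same pattern.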

RESULTS

Overall, 61 (97%) of 63 hospitalists completed the first session, 44 (70%) completed the second session, and 25 (40%) completed the third session of the program. Patient‐satisfaction data were available for 278 patients during the preintervention period and 186 patients during the postintervention period. Patient demographic characteristics were similar for the 2 periods (Table 2).

Patient Characteristics
Characteristic  Preintervention (n=278)  Postintervention (n=186)  P Value
  • NOTE: Abbreviations: SD, standard deviation.

Mean age, y (SD)  62.8 (17.0)  61.6 (17.6)  0.45
Female, no. (%)  155 (55.8)  114 (61.3)  0.24
Nonwhite race, no. (%)  87 (32.2)  53 (29.1)  0.48
Highest education completed, no. (%)
Did not complete high school  12 (4.6)  6 (3.3)  0.45
High school  110 (41.7)  81 (44.0)
4‐year college  50 (18.9)  43 (23.4)
Advanced degree  92 (34.9)  54 (29.4)
Payer, no. (%)
Medicare  137 (49.3)  89 (47.9)  0.83
Private  113 (40.7)  73 (39.3)
Medicaid  13 (4.7)  11 (5.9)
Self‐pay/other  15 (5.4)  13 (7.0)
Self‐reported health status, no. (%)
Poor  19 (7.1)  18 (9.8)  0.41
Fair  53 (19.7)  43 (23.4)
Good  89 (33.1)  57 (31.0)
Very good  89 (33.1)  49 (26.6)
Excellent  19 (7.1)  17 (9.2)

Patient Satisfaction With Hospitalist Communication

The HCAHPS and Press Ganey doctor‐communication domain scores were not significantly different between the preintervention and postintervention periods (HCAHPS: 75.8 vs 79.2, P=0.42; Press Ganey: 61.4 vs 65.9, P=0.39). Two of the 3 HCAHPS items assessing doctor communication were rated higher during the postintervention period, but no result was statistically significant (Table 3). Similarly, all 5 of the Press Ganey items assessing doctor communication were rated higher during the postintervention period, but no result was statistically significant. The HCAHPS overall rating of hospital care was also not significantly different between the preintervention and postintervention periods. Results were similar in multivariate analyses, with no items showing statistically significant differences between the preintervention and postintervention periods.

Preintervention vs Postintervention Comparison of Top‐Box Patient‐Satisfaction Ratings
 Unadjusted Analysis(a)  Adjusted Analysis
 Preintervention, No. (%) [n=270-277]  Postintervention, No. (%) [n=183-186]  P Value  OR (95% CI)  P Value
  • NOTE: Abbreviations: CI, confidence interval; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; OR, odds ratio.

  • (a) Data represent the number and percentage of respondents giving the highest rating (top box) for each question; denominators vary slightly due to missing data.

HCAHPS doctor‐communication domain
How often did doctors treat you with courtesy and respect?  224 (83)  160 (86)  0.31  1.23 (0.81‐2.44)  0.22
How often did doctors listen carefully to you?  205 (75)  145 (78)  0.52  1.22 (0.74‐2.04)  0.42
How often did doctors explain things in a way you could understand?  203 (75)  137 (74)  0.84  0.98 (0.59‐1.64)  0.94
Press Ganey physician‐communication domain
Skill of physician  189 (68)  137 (74)  0.19  1.38 (0.82‐2.31)  0.22
Physician's concern for your questions and worries  157 (57)  117 (64)  0.14  1.30 (0.79‐2.12)  0.30
How well physician kept you informed  158 (58)  114 (62)  0.36  1.15 (0.78‐1.72)  0.71
Time physician spent with you  140 (51)  101 (54)  0.43  1.12 (0.66‐1.89)  0.67
Friendliness/courtesy of physician  198 (71)  136 (74)  0.57  1.20 (0.74‐1.94)  0.46
HCAHPS global ratings
Overall rating of hospital  189 (70) [n=270]  137 (74) [n=186]  0.40  1.33 (0.82‐2.17)  0.24

Pre‐post comparisons based on level of hospitalist participation in the training program are shown in Table 4. For patients discharged by no/low‐training hospitalists, 4 of the 8 total items assessing doctor communication were rated higher during the postintervention period, and 4 were rated lower, but no result was statistically significant. For patients discharged by highly trained hospitalists, all 8 items assessing doctor communication were rated higher during the postintervention period, but no result was statistically significant. Multivariate analyses were similar, with no items showing statistically significant differences between the preintervention and postintervention periods for either group.

Comparison of Top‐Box Patient‐Satisfaction Ratings by Discharge Hospitalist Participation
 No/Low Training  Highly Trained
 Unadjusted Analysis(a)  Adjusted Analysis  Unadjusted Analysis(a)  Adjusted Analysis
 Preintervention, No. (%) [n=151-156]  Postintervention, No. (%) [n=67-70]  P Value  OR (95% CI)  P Value  Preintervention, No. (%) [n=119-122]  Postintervention, No. (%) [n=115-116]  P Value  OR (95% CI)  P Value
  • NOTE: Abbreviations: CI, confidence interval; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; OR, odds ratio.

  • (a) Data represent the number and percentage of respondents giving the highest rating (top box) for each question; denominators vary slightly due to missing data.

HCAHPS doctor‐communication domain
How often did doctors treat you with courtesy and respect?  125 (83)  61 (88)  0.28  1.79 (0.82‐3.89)  0.14  99 (83)  99 (85)  0.65  1.33 (0.62‐2.91)  0.46
How often did doctors listen carefully to you?  116 (77)  53 (76)  0.86  1.08 (0.49‐2.38)  0.19  89 (74)  92 (79)  0.30  1.43 (0.76‐2.69)  0.27
How often did doctors explain things in a way you could understand?  115 (76)  47 (68)  0.24  0.59 (0.27‐1.28)  0.18  88 (74)  90 (78)  0.52  1.31 (0.68‐2.50)  0.42
Press Ganey physician‐communication domain
Skill of physician  110 (71)  52 (74)  0.56  1.32 (0.78‐2.22)  0.31  79 (65)  85 (73)  0.16  1.45 (0.65‐3.27)  0.37
Physician's concern for your questions and worries  92 (60)  41 (61)  0.88  1.00 (0.59‐1.77)  0.99  65 (53)  76 (66)  0.06  1.71 (0.81‐3.60)  0.16
How well physician kept you informed  89 (59)  42 (61)  0.75  1.16 (0.64‐2.08)  0.62  69 (57)  72 (63)  0.34  1.29 (0.75‐2.20)  0.35
Time physician spent with you  83 (54)  37 (53)  0.92  0.87 (0.47‐1.61)  0.65  57 (47)  64 (55)  0.19  1.44 (0.64‐3.21)  0.38
Friendliness/courtesy of physician  116 (75)  45 (66)  0.18  0.72 (0.37‐1.38)  0.32  82 (67)  91 (78)  0.05  1.89 (0.97‐3.68)  0.60
HCAHPS global ratings
Overall rating of hospital  109 (73)  53 (75)  0.63  1.37 (0.67‐2.81)  0.39  86 (71)  90 (78)  0.21  1.60 (0.73‐3.53)  0.24

DISCUSSION

We found no significant improvement in patient satisfaction with doctor communication or overall rating of hospital care after implementation of a communication‐skills training program for hospitalists. There are several potential explanations for our results. First, though we used sound educational methods and attempted to replicate common clinical scenarios during simulation exercises, our program may not have resulted in improved communication behaviors during actual clinical care. We attempted to balance instructional methods that would result in behavioral change with a feasible investment of time and effort on the part of our learners (ie, practicing hospitalists). It is possible that additional time, feedback, and practice of communication skills would be necessary to change behaviors in the clinical setting. However, prior communication‐skills interventions have similarly struggled to show an impact on patient satisfaction.[13, 14] Second, we had incomplete participation in the program, with only 40% of hospitalists completing all 3 planned sessions. We encouraged all hospitalists, regardless of job type, to participate in the program. Participation rates were lower for 1‐year hospitalists compared with career hospitalists. The results of our analyses based on level of hospitalist participation in the training program, although not achieving statistical significance, suggest a greater effect of the program with higher degrees of participation.

Most important, the study was likely underpowered to detect a statistically significant difference in satisfaction results. Leaders were committed to providing communication‐skills training throughout our organization. We did not know the magnitude of potential improvement in satisfaction scores that might arise from our efforts, and therefore we did not conduct power calculations before designing and implementing the training program. Our HCAHPS composite doctor‐communication domain performance was 76% during the preintervention period and 79% during the postintervention period. Assuming an absolute 3% improvement is indeed possible, we would have needed >3000 patients in each period to have 80% power to detect a significant difference. Similarly, we would have needed >2000 patients during each period to have 80% power to detect an absolute 4% improvement in global rating of hospital care.
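
As a rough check on these estimates, a minimal sketch using the standard normal‐approximation sample‐size formula for comparing 2 independent proportions (the article does not state which method was used, so this is an assumption) reproduces both figures:

    from math import sqrt
    from scipy.stats import norm

    def n_per_period(p1, p2, alpha=0.05, power=0.80):
        """Patients needed in each period to detect p1 vs p2 (2-sided test)."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        p_bar = (p1 + p2) / 2
        num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
               + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
        return (num / abs(p2 - p1)) ** 2

    print(round(n_per_period(0.76, 0.79)))  # ~3040: composite doctor communication
    print(round(n_per_period(0.70, 0.74)))  # ~1977: overall hospital rating
    # With a continuity correction the second figure also rises above 2000,
    # consistent with the >3000 and >2000 patients per period quoted above.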

In an attempt to discern whether our favorable results were due to secular trends, we conducted post hoc analyses of HCAHPS nurse‐communication and hospital‐environment domains for the preintervention vs postintervention periods. Two of the 3 nurse‐communication items were rated lower during the postintervention period, but no result was statistically significant. Both hospital‐environment domain items were rated lower during the postintervention period, and 1 result was statistically significant (quiet at night). This post hoc evaluation lends additional support to the potential benefit of the communication‐skills training program.

The findings from this study represent an important issue for leaders attempting to improve quality performance within their organizations. What level of proof is needed before investing time and effort in implementing an intervention? With mounting pressure to improve performance, leaders are often left to make informed decisions based on data that fall short of scientifically rigorous evidence. Importantly, an increase in composite doctor‐communication ratings from 76% to 79% would translate into an improvement from the 25th percentile to 50th‐percentile performance in the fiscal‐year 2011 Press Ganey University Healthcare Consortium benchmark comparison (based on surveys received from September 1, 2010, to August 31, 2011).[15]

Our study has several limitations. First, we assessed an intervention on a single service in a single hospital. Generalizability may be limited, as hospital medicine groups, hospitals, and the patients they serve vary. Second, our intervention was based on a framework (ie, AIDET) that has face validity but has not undergone extensive study to confirm that the underlying constructs, and the behaviors related to them, are tightly linked to patient satisfaction. Third, as previously mentioned, we were likely underpowered to detect a significant improvement in satisfaction resulting from our intervention. Incomplete participation in the training program may have also limited the effect of our intervention. Finally, our comparisons by hospitalist level of participation were based on the discharging physician. Attribution of a patient response to a single physician is problematic because many patients encounter more than 1 hospitalist and 1 or more specialist physicians during their stay.

CONCLUSION

In summary, we found improvements in patient satisfaction with doctor communication after implementation of a communication‐skills training program for hospitalists, but these improvements were not statistically significant. Larger studies are needed to assess whether a communication‐skills training program can truly improve patient satisfaction with doctor communication and overall hospital care.

Acknowledgments

The authors express their gratitude to the hospitalists involved in this program, especially Eric Schaefer, Nita Kulkarni, Stevie Mazyck, Rachel Cyrus, and Hiren Shah. The authors also thank Nicholas Christensen for assistance in data acquisition.

Disclosures: Nothing to report.

References
  1. Becker G, Kempf DE, Xander CJ, Momm F, Olschewski M, Blum HE. Four minutes for a patient, twenty seconds for a relative—an observational study at a university hospital. BMC Health Serv Res. 2010;10:94.
  2. O'Leary KJ, Liebovitz DM, Baker DW. How hospitalists spend their time: insights on efficiency and safety. J Hosp Med. 2006;1(2):88-93.
  3. CAHPS: Surveys and Tools to Advance Patient‐Centered Care. Available at: http://cahps.ahrq.gov. Accessed July 12, 2012.
  4. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev. 2010;67(1):27-37.
  5. Goldstein E, Farquhar M, Crofton C, Darby C, Garfinkel S. Measuring hospital care from the patients' perspective: an overview of the CAHPS Hospital Survey development process. Health Serv Res. 2005;40(6 part 2):1977-1995.
  6. US Department of Health and Human Services. Hospital Compare. Available at: http://hospitalcompare.hhs.gov/. Accessed November 5, 2012.
  7. Centers for Medicare & Medicaid Services. Hospital Value Based Purchasing Program. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/hospital‐value‐based‐purchasing/index.html?redirect=/hospital‐value‐based‐purchasing. Accessed August 1, 2012.
  8. Rao JK, Anderson LA, Inui TS, Frankel RM. Communication interventions make a difference in conversations between physicians and patients: a systematic review of the evidence. Med Care. 2007;45(4):340-349.
  9. O'Leary KJ, Wayne DB, Landler MP, et al. Impact of localizing physicians to hospital units on nurse‐physician communication and agreement on the plan of care. J Gen Intern Med. 2009;24(11):1223-1227.
  10. Studer Group. Acknowledge, Introduce, Duration, Explanation and Thank You. Available at: http://www.studergroup.com/aidet. Accessed November 5, 2012.
  11. Kern DE, Thomas PA, Bass EB, Howard DM, eds. Curriculum Development for Medical Education: A Six‐Step Approach. Baltimore, MD: Johns Hopkins University Press; 1998.
  12. Vanderbilt University Medical Center and Studer Group. Building Patient Trust with AIDET®: Clinical Excellence with Patient Compliance Through Effective Communication. Gulf Breeze, FL: Fire Starter Publishing; 2008.
  13. Brown JB, Boles M, Mullooly JP, Levinson W. Effect of clinician communication skills training on patient satisfaction: a randomized, controlled trial. Ann Intern Med. 1999;131(11):822-829.
  14. Fossli Jensen B, Gulbrandsen P, Dahl FA, Krupat E, Frankel RM, Finset A. Effectiveness of a short course in clinical communication skills for hospital doctors: results of a crossover randomized controlled trial (ISRCTN22153332). Patient Educ Couns. 2010;84(2):163-169.
  15. Press Ganey HCAHPS Top Box and Rank Report, Fiscal Year 2011. Inpatient, University Healthcare Consortium Peer Group. South Bend, IN: Press Ganey Associates; 2011.
Issue
Journal of Hospital Medicine - 8(6)
Page Number
315-320
Display Headline
Impact of hospitalist communication‐skills training on patient‐satisfaction scores
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Kevin J. O'Leary, MD, MS, Associate Professor of Medicine, Division of Hospital Medicine, Northwestern University Feinberg School of Medicine, 211 E. Ontario St., Suite 211, Chicago, IL 60611; Telephone: 312-926-5984; Fax: 312-926-4588; E-mail: [email protected]

Impact of Penicillin Skin Testing

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
The impact of penicillin skin testing on clinical practice and antimicrobial stewardship

Self‐reported penicillin allergy is common and frequently limits the antimicrobial agents available to choose from. This often results in the use of more expensive, potentially more toxic, and possibly less efficacious agents.[1, 2]

For over 30 years, penicilloyl‐polylysine (PPL) penicillin skin testing (PST) was widely used to diagnose penicillin allergy with a negative predictive value (NPV) of about 97% to 99%.[3] After being off the market for 5 years, PPL PST was reapproved in 2009 as PRE‐PEN.[4] However, many clinicians still fail to utilize PST despite its simplicity and substantial clinical impact. The main purpose of this study was to describe the predictive value of PST and impact on antibiotic selection in a sample of hospitalized patients with a reported history of penicillin allergy.

METHODS

In 2010, PST was introduced as a quality‐improvement measure after approval and support from the chief of professional services and the medical staff executive committee at Vidant Medical Center, an 861‐bed tertiary care and teaching hospital. Our antimicrobial stewardship program is regularly contacted for approval of alternative therapies in penicillin‐allergic patients. The PST quality‐improvement intervention was implemented to avoid resorting to less appropriate therapies in these situations. Following approval by the University and Medical Center Institutional Review Board, we designed a 5‐month study to assess the impact of this ongoing quality‐improvement measure from March 2012 to July 2012.

Hospitalized patients of all ages with reported penicillin allergies were identified from our antimicrobial stewardship database. Their charts were reviewed for demographics, antibiotic use, clinical infection, and allergy description. The decision whether to alter antibiotic therapy to a β‐lactam regimen was based on microbiologic results, laboratory values, clinical infection, and history of immunoglobulin E (IgE)‐mediated reactions, as defined by the updated drug allergy practice parameters.[5] IgE‐mediated reactions included: (1) immediate urticaria, laryngeal edema, or hypotension; (2) anemia; and (3) fever, arthralgias, lymphadenopathy, and an urticarial rash after 7 to 21 days.[5, 6, 7] We defined anaphylaxis as the development of angioedema or hemodynamic instability within 1 hour of penicillin administration. A true negative reaction was a lack of an IgE‐mediated reaction to all the drug challenges.

Patients in the medical, surgical, and labor and delivery wards; intensive care units; and emergency department underwent testing. The β‐lactam agent used after a negative PST was recorded, and patients were followed for 24 hours after transitioning their therapy to a β‐lactam regimen. We excluded subjects with (1) non‐IgE‐mediated reactions, (2) skin conditions that can give false‐positive results, (3) medications that may interfere with anaphylaxis therapy, (4) a history of severe exfoliative reactions to β‐lactams, (5) anaphylaxis less than 4 weeks prior, (6) allergies to antibiotics other than penicillin, and (7) an uncertain allergy history.

PST Reagents/Procedure

Our benzylpenicilloyl major determinant molecule, commercially produced as PPL, was purchased as PRE‐PEN from ALK‐Abello, Round Rock, Texas. Penicillin G potassium, purchased from Pfizer, New York, New York, is the only commercially available minor determinant and can improve identification of penicillin allergy by up to 97%.[2] The PST panel also included histamine (positive control) and normal saline (negative control).

Skin Testing Procedure

An infectious diseases fellow (R.H.R. or B.K.) was supervised in preparing for potential anaphylaxis, applying the reagents, and interpreting the results based on drug allergy practice parameters.[5] The preliminary epicutaneous prick/puncture test was performed with a lancet in subjects without prior anaphylaxis using full‐strength PPL and penicillin G potassium reagents. If there was no response within 15 minutes, which we defined as a lack of wheal formation 3 mm or greater than that of the negative control, 0.02 to 0.03 mL of each reagent was injected intradermally using a tuberculin syringe and examined for 15 minutes.[5] If there was no response, patients were then challenged with either a single oral dose of penicillin V potassium 250 mg or whichever oral penicillin agent they had previously reported an allergy to. If no reaction was appreciated within 2 hours, their therapy was changed to a β‐lactam agent, including penicillins, cephalosporins, and carbapenems, for the remaining duration of therapy (Figure 1). An estimate of NPV was obtained after 24 hours of follow‐up.

Figure 1
Antibiotics used prior to penicillin skin testing and β‐lactams transitioned to after a negative penicillin skin test. The upper graph illustrates the antibiotics used prior to penicillin skin testing in 146 patients over the 5‐month study period. The lower graph illustrates the β‐lactam antibiotics used after a negative penicillin skin test in the same patients. Abbreviations: Cipro, ciprofloxacin; Clinda, clindamycin; Dapto, daptomycin; Pip/Tazo, piperacillin‐tazobactam; Tobra, tobramycin; Trim/Sulfa, trimethoprim‐sulfamethoxazole.
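
The stepwise flow described above can also be summarized schematically (an illustrative sketch only, not clinical guidance; the data structure and function names are our own):

    from dataclasses import dataclass

    # Schematic of the testing sequence: prick test (performed only in
    # subjects without prior anaphylaxis, per the text), then intradermal
    # test, then observed oral challenge. A wheal 3 mm or more larger than
    # the negative control, read at 15 minutes, counts as positive.
    @dataclass
    class TestReadings:
        prior_anaphylaxis: bool
        prick_wheal_vs_control_mm: float        # read at 15 minutes
        intradermal_wheal_vs_control_mm: float  # 0.02-0.03 mL, read at 15 minutes
        oral_challenge_reaction: bool           # observed for 2 hours

    def pst_result(r: TestReadings) -> str:
        if not r.prior_anaphylaxis and r.prick_wheal_vs_control_mm >= 3:
            return "positive"
        if r.intradermal_wheal_vs_control_mm >= 3:
            return "positive"
        if r.oral_challenge_reaction:
            return "positive"
        return "negative"  # eligible to transition to a beta-lactam agent

    print(pst_result(TestReadings(False, 0, 0, False)))  # "negative"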

Statistical Analysis

We designed the study to determine whether the reapproved PST achieves an NPV of at least 95%.[3] We hypothesized that clinicians would be willing to utilize PST even if it has an NPV of slightly less than 98% compared to the current standard of treating patients without PST.[7] Assuming an equivalence margin of 3%, we estimated that a sample size of 146 would achieve at least 82% power to test the hypothesis of NPV ≥95% using a 1‐sided Z test with a type‐I error rate of 5%.[8] Once the sample size of 146 subjects was reached, we stopped recruiting patients.
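
For readers who wish to retrace the calculation, one common approximation (evaluating the variance at the expected NPV of 98%; the cited text by Chow et al. describes several variants, so the exact method is an assumption) lands close to the reported sample size:

    from scipy.stats import norm

    p1, delta = 0.98, 0.03    # expected NPV and 3% equivalence margin
    z_a = norm.ppf(1 - 0.05)  # 1-sided Z test, 5% type-I error rate
    z_b = norm.ppf(0.82)      # 82% power
    n = (z_a + z_b) ** 2 * p1 * (1 - p1) / delta ** 2
    print(round(n))           # ~143; the reported n = 146 gives slightly
                              # more than the stated 82% power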

Sample characteristics of the subjects who underwent testing were summarized using descriptive statistics. Sample proportions were calculated to summarize categorical variables. Mean and standard deviation were calculated to summarize continuous variables. Cost analysis of antibiotic therapy was estimated from the Vidant Medical Center antibiotic pharmaceutical acquisition costs. Estimated costs of peripherally inserted central catheter (PICC) placement and removal as well as laboratory testing costs were obtained from our institution's medical billing department. Market costs of pharmacist drug calibration and nursing assessments with dressing changes were obtained from hospital‐affiliated outpatient antibiotic infusion companies.

RESULTS

A total of 4031 allergy histories were reviewed during the 5‐month study period to achieve the sample size of 146 patients (Table 1). Of those, 3885 were excluded (Figure 2). Common infections included pneumonias (26%) and urinary tract infections (20%) (Table 2). Only 1 subject had a positive reaction, with hives, edema, and itching approximately 6 minutes after the agents were injected intradermally. The remaining 145 (99%) had negative reactions to the PST and oral challenge and were then successfully transitioned to a β‐lactam agent without any reaction at 24 hours, giving an NPV of 100%. Ten subjects were switched from intravenous to oral β‐lactam agents (Figure 1). Avoidance of PICC placement ($1,200) and removal ($65), dressing changes, weekly drug‐level testing, laboratory technician, and pharmaceutical drug calibration costs allowed for a healthcare cost reduction of $5,233 across the 146 patients studied (approximately $520 for each of the 10 patients switched to oral therapy). The total cost of therapy would have been $113,991 if the PST had not been performed. However, the cost of altered therapy following a negative PST was $81,180, a difference of $32,811 ($225 per patient) over the 5‐month period. The total estimated annual difference, including antibiotic alteration and associated drug costs, would be $82,000.
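
A quick arithmetic check of the cost figures, with an approximate confidence bound for the NPV that is our addition rather than a reported result:

    # Cost difference over the 5-month study and per enrolled patient
    # (dollar amounts as reported in the text).
    diff = 113_991 - 81_180
    print(diff, round(diff / 146))  # 32811 total, ~225 dollars per patient

    # NPV after 24 hours: 145 negative tests, 0 reactions. The "rule of
    # three" gives an approximate 95% lower confidence bound, consistent
    # with the 95% NPV hypothesis the study was powered for.
    print(f"NPV 145/145 = 100%, lower bound ~{1 - 3 / 145:.1%}")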

Prevalence of Reported Antimicrobial Drug Allergy in 4031 Charts Reviewed Over a 5‐Month Period
Antibiotic No. of Patients Reporting An Allergy % Per Total Charts Reviewed
Penicillin 428 10.6
Sulfonamide 271 6.7
Quinolone 108 2.7
Cephalosporin 81 2.0
Macrolide 65 1.6
Vancomycin 39 0.9
Tetracycline 20 0.5
Clindamycin 18 0.4
Metronidazole 9 0.2
Linezolid 2 0.05
Figure 2
Study design with inclusion and exclusion criteria. Abbreviations: IgE, immunoglobulin E.
Information Gathered During the Penicillin Skin Test Study
Categories No. of Patients (%)
  • NOTE: Abbreviations: IgE, Immunoglobulin E.

Time since last reported penicillin use
1 month-1 year 6 (4)
2-5 years 39 (27)
6-10 years 23 (16)
>10 years 78 (53)
Reported IgE‐mediated reactions
Bronchospasm 23 (16)
Urticarial rash 100 (68)
Edema 32 (22)
Anaphylaxis 21 (14)
Age on admission, y
20-50 28 (19)
51-60 29 (20)
61-70 41 (28)
71-80 24 (16)
>80 24 (16)
Gender
Male 55 (40)
Female 88 (60)
Race
White 82 (56)
Black 61 (42)
Hispanic 3 (2)
Infections being treated
Bacteremia 7 (4.8)
Catheter‐related bloodstream infection 2 (1.4)
Empyema 1 (0.7)
Epidural abscess 2 (1.4)
Infective endocarditis 4 (2.7)
Intra‐abdominal infection 24 (16.4)
Meningitis 1 (0.7)
Neutropenic fever 1 (0.7)
Osteomyelitis 6 (4.1)
Pericardial effusion 1 (0.7)
Prosthetic joint infection 5 (3.4)
Pneumonia 40 (27.4)
Skin and soft‐tissue infection 20 (13.7)
Syphilis 3 (2.1)
Urinary tract infection 29 (19.7)

DISCUSSION

PST is the most rapid, sensitive, and cost‐effective modality for evaluating patients with immediate allergic reactions to penicillin. Over 90% of individuals with a reported history of penicillin allergy have a negative PST, implying that most patients who test negative are truly not allergic.[7, 9, 10, 11, 12] Our study shows that the reapproved PST with the PPL and penicillin G determinants continues to have a high NPV. A patient with a negative PST result is generally at low risk of developing an immediate‐type hypersensitivity reaction to penicillin.[2, 11] PST frequently allowed for less expensive agents that would otherwise have been avoided due to a reported allergy. The estimated annual savings of $82,000 from antibiotic alteration with successful transition to a β‐lactam agent after a negative PST illustrates its value, supports its validity, and makes this study novel.

Many β‐lactamase inhibitor combinations (eg, piperacillin‐tazobactam), fourth‐generation cephalosporins (eg, cefepime), and carbapenems remain costly. Despite this, we were still able to achieve a significant reduction in overall cost. In addition to financial benefits, PST allowed for the use of more appropriate agents with fewer potential adverse effects. Narrow‐spectrum, non‐β‐lactam agents were sometimes changed to a broader‐spectrum β‐lactam agent, and we frequently consolidated 2 agents into a single broad‐spectrum β‐lactam. As a result, more patients received broad‐spectrum agents after the PST than before (89 vs 72 patients). However, we were able to avoid second‐line agents, such as aztreonam, vancomycin, linezolid, daptomycin, and tobramycin, in many patients with infections that are often best treated with penicillin‐based antibiotics (eg, syphilis, group B Streptococcus infections). With the increasing incidence and recovery of multidrug‐resistant bacteria, PST may also allow use of potentially more effective antimicrobial agents.

A possible limitation is that our prevalence of a true penicillin allergy was <1%, whereas Bousquet et al. reported a higher prevalence of about 20%.[7] Although our prevalence may not be generalizable, Bousquet's study assessed only patients whose allergic reactions had occurred <5 years prior.

The introduction of PST into clinical practice allows trained healthcare providers to prescribe less expensive, more appropriate, and less toxic antimicrobial agents. The ability to reintroduce penicillin agents when needed in the future makes PST even more cost‐effective than described here. PST should become a standard of care when prescribing antibiotics to patients with a history of penicillin allergy. Medical providers should be aware of its utility, acquire training, and incorporate it into their practice.

Acknowledgment

Disclosures: Paul P. Cook, MD, has potential conflicts of interest with Gilead (investigator), Pfizer (investigator), Merck (investigator and speakers' bureau), and Forest (speakers' bureau). Neither he nor any of the other authors has received any sources of funding for this article. For the remaining authors, no conflicts were declared. The corresponding author, Ramzy Rimawi, MD, had full access to all of the data in the study and had final responsibility for the decision to submit for publication.

References
  1. Jost BC, Wedner HJ, Bloomberg GR. Elective penicillin skin testing in a pediatric outpatient setting. Ann Allergy Asthma Immunol. 2006;97(6):807-812.
  2. US Department of Veterans Affairs Web site. Benzylpenicilloyl polylysine (PRE‐PEN) national drug monograph. May 2012. Available at: http://www.pbm.va.gov/DrugMonograph.aspx. Accessed September 1, 2012.
  3. Park M, James T. Diagnosis and management of penicillin allergy. Mayo Clin Proc. 2005;80(3):405-410.
  4. PRE‐PEN penicillin skin test antigen. Available at: http://www.alk‐abello.com/us/products/pre‐pen/Pages/PREPEN.aspx. Accessed September 1, 2012.
  5. Solensky R, Khan DA, Bernstein IL, et al.; Joint Task Force on Practice Parameters; American Academy of Allergy, Asthma and Immunology; American College of Allergy, Asthma and Immunology; Joint Council of Allergy, Asthma and Immunology. Drug allergy: an updated practice parameter. Ann Allergy Asthma Immunol. 2010;105:259-273.
  6. Parker CW. Immunochemical mechanisms in penicillin allergy. Fed Proc. 1965;24:51-54.
  7. Bousquet PJ, Pipet A, Bousquet‐Rouanet L, et al. Oral challenges are needed in the diagnosis of beta‐lactam hypersensitivity. Clin Exp Allergy. 2008;38(1):185-190.
  8. Chow SC, Shao J, Wang H. Sample Size Calculations in Clinical Research. New York, NY: Chapman & Hall/CRC; 2003.
  9. Richter AG, Wong G, Goddard S, et al. Retrospective case series analysis of penicillin allergy testing in a UK specialist regional allergy clinic. J Clin Pathol. 2011;64:1014-1018.
  10. Stember RH. Prevalence of skin test reactivity in patients with convincing, vague and unacceptable histories of penicillin allergy. Allergy Asthma Proc. 2005;26(1):59-64.
  11. Valyasevi MA, Van Dellen RG. Frequency of systemic reactions to penicillin skin tests. Ann Allergy Asthma Immunol. 2000;85:363-365.
  12. Lee CE, Zembower TR, Fotis MA, et al. The incidence of antimicrobial allergies in hospitalized patients. Arch Intern Med. 2000;160:2819-2822.
Issue
Journal of Hospital Medicine - 8(6)
Page Number
341-345

Self‐reported penicillin allergy is common and frequently limits the available antimicrobial agents to choose from. This often results in the use of more expensive, potentially more toxic, and possibly less efficacious agents.[1, 2]

For over 30 years, penicilloyl‐polylysine (PPL) penicillin skin testing (PST) was widely used to diagnose penicillin allergy with a negative predictive value (NPV) of about 97% to 99%.[3] After being off the market for 5 years, PPL PST was reapproved in 2009 as PRE‐PEN.[4] However, many clinicians still fail to utilize PST despite its simplicity and substantial clinical impact. The main purpose of this study was to describe the predictive value of PST and impact on antibiotic selection in a sample of hospitalized patients with a reported history of penicillin allergy.

METHODS

In 2010, PST was introduced as a quality‐improvement measure after approval and support from the chief of professional services and the medical staff executive committee at Vidant Medical Center, an 861‐bed tertiary care and teaching hospital. Our antimicrobial stewardship program is regularly contacted for approval of alternative therapies in penicillin allergic patients. The PST quality‐improvement intervention was implemented to avoid resorting to less appropriate therapies in these situations. Following approval by the University and Medical Center Institutional Review Board, we designed a 4‐month study to assess the impact of this ongoing quality improvement measure from March 2012 to July 2012.

Hospitalized patients of all ages with reported penicillin allergies were obtained from our antimicrobial stewardship database. Their charts were reviewed for demographics, antibiotic use, clinical infection, and allergic description. Deciding whether to alter antibiotic therapy to a ‐lactam regimen was based on microbiologic results, laboratory values, clinical infection, and history of immunoglobulin E (IgE)‐mediated reactions, as defined by the updated drug allergy practice parameters.[5] IgE‐mediated reactions included: (1) immediate urticaria, laryngeal edema, or hypotension; (2) anemia; and (3) fever, arthralgias, lymphadenopathy, and an urticarial rash after 7 to 21 days.[5, 6, 7] We defined anaphylaxis as the development of angioedema or hemodynamic instability within 1 hour of penicillin administration. A true negative reaction was a lack of an IgE‐mediated reaction to all the drug challenges.

Patients in the medical, surgical, labor, and delivery wards; intensive care units; and emergency department underwent testing. The ‐lactam agent used after a negative PST was recorded, and the patients were followed for 24 hours after transitioning their therapy to a ‐lactam regimen. Excluded subjects included those with (1) nonIgE‐mediated reactions, (2) skin conditions that can give false positive results, (3) medications that may interfere with anaphylactic therapy, (4) history of severe exfoliative reactions to ‐lactams, (5) anaphylaxis less than 4 weeks prior, (6) allergies to antibiotics other than penicillin, and (7) uncertain allergy history.

PST Reagents/Procedure

Our benzylpenicilloyl major determinant molecule, commercially produced as PPL, was purchased as a PRE‐PEN from ALK‐Abello, Round Rock, Texas. Penicillin G potassium, purchased from Pfizer, New York, New York, is the only commercially available minor determinant and can improve identification of penicillin allergy by up to 97%.[2] The PST panel also included histamine (positive control) and normal saline (negative control).

Skin Testing Procedure

An infectious diseases fellow (R.H.R. or B.K.) was supervised in preparing for potential anaphylaxis, applying the reagents and interpreting the results based on drug allergy practice parameters.[5] The preliminary epicutaneous prick/puncture test was performed with a lancet in subjects without prior anaphylaxis using full‐strength PPL and penicillin G potassium reagents. If there was no response within 15 minutes, which we defined as a lack of wheal formation 3 mm or greater than that of the negative control, 0.02 to 0.03 mL of each reagent was injected intradermally using a tuberculin syringe and examined for 15 minutes.[5] If there was no response, patients were then challenged with either a single oral dose of penicillin V potassium 250 mg or whichever oral penicillin agent they previously reported an allergy to. If no reaction was appreciated within 2 hours, their therapy was changed to a ‐lactam agent including penicillins, cephalosporins, and carbapenems for the remaining duration of therapy (Figure 1) An estimate of NPV was obtained after 24 hours follow‐up.

Figure 1
Antibiotics used prior to penicillin skin testing and β‐lactams transitioned to after a negative penicillin skin test. The upper graph illustrates the antibiotics used prior to penicillin skin testing in 146 patients over the 5‐month study period. The lower graph illustrates the β‐lactam antibiotics used after a negative penicillin skin test in the same patients. Abbreviations: Cipro, ciprofloxacin; Clinda, clindamycin; Dapto, daptomycin; Pip/Tazo, piperacillin‐tazobactam; Tobra, tobramycin; Trim/Sulfa, trimethoprim‐sulfamethoxazole.

Statistical Analysis

We designed a study to estimate whether the reapproved PST achieves an NPV of at least 95%.[3] We hypothesized that clinicians will be willing to utilize PST even if it has an NPV of slightly less than 98% compared to the current standard of treating patients without PST.[7] Assuming an equivalence margin of 3%, we estimated a sample size of 146 to achieve at least 82% power to test a hypothesis of NPV 95% using a 1‐sided Z test with a type‐I error rate of 5%.[8] Once the sample size of 146 subjects was reached, we stopped recruiting patients.

Sample characteristics of the subjects who underwent testing were summarized using descriptive statistics. Sample proportions were calculated to summarize categorical variables. Mean and standard deviation were calculated to summarize continuous variables. Cost analysis of antibiotic therapy was estimated from the Vidant Medical Center antibiotic pharmaceutical acquisition costs. Estimated cost of peripherally inserted central catheter (PICC) placement and removal as well as laboratory testing costs were obtained from our institution's medical billing department. Marketing costs of pharmacist drug calibration and nursing assessments with dressing changes were obtained from hospital‐affiliated outpatient antibiotic infusion companies.

RESULTS

A total of 4031 allergy histories were reviewed during the 5‐month study period to achieve the sample size of 146 patients (Table 1). Of those, 3885 were excluded (Figure 2). Common infections included pneumonias (26%) and urinary tract infections (20%) (Table 2) Only 1 subject had a positive reaction with hives, edema, and itching approximately 6 minutes after the agents were injected intradermally. The remaining 145 (99%) had negative reactions to the PST and oral challenge and were then successfully transitioned to a ‐lactam agent without any reaction at 24 hours, giving an NPV of 100%. Ten subjects were switched from intravenous to oral ‐lactam agents (Figure 1). Avoidance of PICC placement ($1,200) and removal ($65), dressing changes, weekly drug‐level testing, laboratory technician, and pharmaceutical drug calibration costs allowed for a healthcare reduction of $5,233 ($520/patient) based on the 146 patients studied. The total cost of therapy would have been $113,991 if the PST had not been performed. However, the cost of altered therapy following a negative PST was $81,180, a difference of $32,811 ($225/per patient) in a 5‐month period. The total estimated annual difference, including antibiotic alteration and associated drug‐costs, would be $82,000.

Prevalence of Reported Antimicrobial Drug Allergy in 4031 Charts Reviewed Over a 5‐Month Period
Antibiotic No. of Patients Reporting An Allergy % Per Total Charts Reviewed
Penicillin 428 10.6
Sulfonamide 271 6.7
Quinolone 108 2.7
Cephalosporin 81 2.0
Macrolide 65 1.6
Vancomycin 39 0.9
Tetracycline 20 0.5
Clindamycin 18 0.4
Metronidazole 9 0.2
Linezolid 2 0.05
Figure 2
Study design with inclusion and exclusion criteria. Abbreviations: IgE, immunoglobulin E.
Information Gathered During the Penicillin Skin Test Study
Categories No. of Patients (%)
  • NOTE: Abbreviations: IgE, Immunoglobulin E.

Time since last reported penicillin use
1 month1 year 6 (4)
25 years 39 (27)
610 years 23 (16)
>10 years 78 (53)
Reported IgE‐mediated reactions
Bronchospasm 23 (16)
Urticarial rash 100 (68)
Edema 32 (22)
Anaphylaxis 21 (14)
Age on admission, y
2050 28 (19)
5160 29 (20)
6170 41 (28)
7180 24 (16)
>80 24 (16)
Gender
Male 55 (40)
Female 88 (60)
Race
White 82 (56)
Black 61 (42)
Hispanic 3 (2)
Infections being treated
Bacteremia 7 (4.8)
Catheter‐related bloodstream infection 2 (1.4)
Empyema 1 (0.7)
Epidural abscess 2 (1.4)
Infective endocarditis 4 (2.7)
Intra‐abdominal infection 24 (16.4)
Meningitis 1 (0.7)
Neutropenic fever 1 (0.7)
Osteomyelitis 6 (4.1)
Pericardial effusion 1 (0.7)
Prosthetic joint infection 5 (3.4)
Pneumonia 40 (27.4)
Skin and soft‐tissue infection 20 (13.7)
Syphilis 3 (2.1)
Urinary tract infection 29 (19.7)

DISCUSSION

PST is the most rapid, sensitive, and cost‐effective modality for evaluating patients with immediate allergic reactions to penicillin. Over 90% of individuals with a true history of penicillin allergy have confirmed sensitivity with a PST, implying most patients who are skin tested negative are truly not allergic.[7, 9, 10, 11, 12] Our study shows that the reapproved PST with the PPL and penicillin G determinants continues to have a high NPV. A patient with a negative PST result is generally at a low risk of developing an immediate‐type hypersensitivity reaction to penicillin.[2, 11] PST frequently allowed for less expensive agents that would have been avoided due to a reported allergy. The estimated annual savings of $82,000 dollars from antibiotic alteration with successful transition to a ‐lactam agent after a negative PST illustrates its value, supports its validity, and makes this study novel.

Many ‐lactamase inhibitors (ie, piperacillin‐tazobactam), fourth generation cephalosporins (ie, cefepime), and carbapenems still remain costly. Despite this, we were still able to achieve a significant reduction in overall cost. In addition to financial benefits, PST allowed for the use of more appropriate agents with less potential adverse effects. Narrow‐spectrum, non‐lactam agents were sometimes altered to a broader‐spectrum ‐lactam agent. We also frequently tailored 2 agents to just 1 broad‐spectrum ‐lactam. This led to more patients being given broad‐spectrum agents after the PST (72 vs 89 patients). However, we were able to avoid using second‐line agents, such as aztreonam, vancomycin, linezolid, daptomycin, and tobramycin, in many patients with infections that are often best treated with penicillin‐based antibiotics (ie, syphilis, group B Streptococcus infections). With increasing incidence and recovery of multidrug‐resistant bacteria, PST may also allow use of potentially more effective antimicrobial agents.

A possible limitation is that our prevalence of a true penicillin allergy was <1%, whereas Bousquet et al. illustrate a higher prevalence of about 20%.[7] Although our prevalence may not be generalizable, Bousquet's study only assessed patients with allergies <5 years prior.

The introduction of PST into clinical practice will allow trained healthcare providers to prescribe cheaper, more appropriate, less toxic antimicrobial agents. The overall benefit of reintroducing penicillin agents when needed in the future is far more cost‐effective than what is described here. PST should become a standard of care when prescribing antibiotics to patients with a history of penicillin allergy. Medical providers should be aware of its utility, acquire training, and incorporate it into their practice.

Acknowledgment

Disclosures: Paul P. Cook, MD, has potential conflicts of interest with Gilead (investigator), Pfizer (investigator), Merck (investigator and speakers' bureau), and Forest (speakers' bureau). Neither he nor any of the other authors has received any sources of funding for this article. For the remaining authors, no conflicts were declared. The corresponding author, Ramzy Rimawi, MD, had full access to all of the data in the study and had final responsibility for the decision to submit for publication.

Self‐reported penicillin allergy is common and frequently limits the available antimicrobial agents to choose from. This often results in the use of more expensive, potentially more toxic, and possibly less efficacious agents.[1, 2]

For over 30 years, penicilloyl‐polylysine (PPL) penicillin skin testing (PST) was widely used to diagnose penicillin allergy with a negative predictive value (NPV) of about 97% to 99%.[3] After being off the market for 5 years, PPL PST was reapproved in 2009 as PRE‐PEN.[4] However, many clinicians still fail to utilize PST despite its simplicity and substantial clinical impact. The main purpose of this study was to describe the predictive value of PST and impact on antibiotic selection in a sample of hospitalized patients with a reported history of penicillin allergy.

METHODS

In 2010, PST was introduced as a quality‐improvement measure after approval and support from the chief of professional services and the medical staff executive committee at Vidant Medical Center, an 861‐bed tertiary care and teaching hospital. Our antimicrobial stewardship program is regularly contacted for approval of alternative therapies in penicillin allergic patients. The PST quality‐improvement intervention was implemented to avoid resorting to less appropriate therapies in these situations. Following approval by the University and Medical Center Institutional Review Board, we designed a 4‐month study to assess the impact of this ongoing quality improvement measure from March 2012 to July 2012.

Hospitalized patients of all ages with reported penicillin allergies were obtained from our antimicrobial stewardship database. Their charts were reviewed for demographics, antibiotic use, clinical infection, and allergic description. Deciding whether to alter antibiotic therapy to a ‐lactam regimen was based on microbiologic results, laboratory values, clinical infection, and history of immunoglobulin E (IgE)‐mediated reactions, as defined by the updated drug allergy practice parameters.[5] IgE‐mediated reactions included: (1) immediate urticaria, laryngeal edema, or hypotension; (2) anemia; and (3) fever, arthralgias, lymphadenopathy, and an urticarial rash after 7 to 21 days.[5, 6, 7] We defined anaphylaxis as the development of angioedema or hemodynamic instability within 1 hour of penicillin administration. A true negative reaction was a lack of an IgE‐mediated reaction to all the drug challenges.

Patients in the medical, surgical, labor, and delivery wards; intensive care units; and emergency department underwent testing. The ‐lactam agent used after a negative PST was recorded, and the patients were followed for 24 hours after transitioning their therapy to a ‐lactam regimen. Excluded subjects included those with (1) nonIgE‐mediated reactions, (2) skin conditions that can give false positive results, (3) medications that may interfere with anaphylactic therapy, (4) history of severe exfoliative reactions to ‐lactams, (5) anaphylaxis less than 4 weeks prior, (6) allergies to antibiotics other than penicillin, and (7) uncertain allergy history.

PST Reagents/Procedure

Our benzylpenicilloyl major determinant molecule, commercially produced as PPL, was purchased as a PRE‐PEN from ALK‐Abello, Round Rock, Texas. Penicillin G potassium, purchased from Pfizer, New York, New York, is the only commercially available minor determinant and can improve identification of penicillin allergy by up to 97%.[2] The PST panel also included histamine (positive control) and normal saline (negative control).

Skin Testing Procedure

An infectious diseases fellow (R.H.R. or B.K.) was supervised in preparing for potential anaphylaxis, applying the reagents and interpreting the results based on drug allergy practice parameters.[5] The preliminary epicutaneous prick/puncture test was performed with a lancet in subjects without prior anaphylaxis using full‐strength PPL and penicillin G potassium reagents. If there was no response within 15 minutes, which we defined as a lack of wheal formation 3 mm or greater than that of the negative control, 0.02 to 0.03 mL of each reagent was injected intradermally using a tuberculin syringe and examined for 15 minutes.[5] If there was no response, patients were then challenged with either a single oral dose of penicillin V potassium 250 mg or whichever oral penicillin agent they previously reported an allergy to. If no reaction was appreciated within 2 hours, their therapy was changed to a ‐lactam agent including penicillins, cephalosporins, and carbapenems for the remaining duration of therapy (Figure 1) An estimate of NPV was obtained after 24 hours follow‐up.

Figure 1
Antibiotics used prior to penicillin skin testing and β‐lactams transitioned to after a negative penicillin skin test. The upper graph illustrates the antibiotics used prior to penicillin skin testing in 146 patients over the 5‐month study period. The lower graph illustrates the β‐lactam antibiotics used after a negative penicillin skin test in the same patients. Abbreviations: Cipro, ciprofloxacin; Clinda, clindamycin; Dapto, daptomycin; Pip/Tazo, piperacillin‐tazobactam; Tobra, tobramycin; Trim/Sulfa, trimethoprim‐sulfamethoxazole.

Statistical Analysis

We designed a study to estimate whether the reapproved PST achieves an NPV of at least 95%.[3] We hypothesized that clinicians will be willing to utilize PST even if it has an NPV of slightly less than 98% compared to the current standard of treating patients without PST.[7] Assuming an equivalence margin of 3%, we estimated a sample size of 146 to achieve at least 82% power to test a hypothesis of NPV 95% using a 1‐sided Z test with a type‐I error rate of 5%.[8] Once the sample size of 146 subjects was reached, we stopped recruiting patients.

Sample characteristics of the subjects who underwent testing were summarized using descriptive statistics. Sample proportions were calculated to summarize categorical variables. Mean and standard deviation were calculated to summarize continuous variables. Cost analysis of antibiotic therapy was estimated from the Vidant Medical Center antibiotic pharmaceutical acquisition costs. Estimated cost of peripherally inserted central catheter (PICC) placement and removal as well as laboratory testing costs were obtained from our institution's medical billing department. Marketing costs of pharmacist drug calibration and nursing assessments with dressing changes were obtained from hospital‐affiliated outpatient antibiotic infusion companies.

RESULTS

A total of 4031 allergy histories were reviewed during the 5‐month study period to achieve the sample size of 146 patients (Table 1). Of those, 3885 were excluded (Figure 2). Common infections included pneumonias (26%) and urinary tract infections (20%) (Table 2) Only 1 subject had a positive reaction with hives, edema, and itching approximately 6 minutes after the agents were injected intradermally. The remaining 145 (99%) had negative reactions to the PST and oral challenge and were then successfully transitioned to a ‐lactam agent without any reaction at 24 hours, giving an NPV of 100%. Ten subjects were switched from intravenous to oral ‐lactam agents (Figure 1). Avoidance of PICC placement ($1,200) and removal ($65), dressing changes, weekly drug‐level testing, laboratory technician, and pharmaceutical drug calibration costs allowed for a healthcare reduction of $5,233 ($520/patient) based on the 146 patients studied. The total cost of therapy would have been $113,991 if the PST had not been performed. However, the cost of altered therapy following a negative PST was $81,180, a difference of $32,811 ($225/per patient) in a 5‐month period. The total estimated annual difference, including antibiotic alteration and associated drug‐costs, would be $82,000.

Prevalence of Reported Antimicrobial Drug Allergy in 4031 Charts Reviewed Over a 5‐Month Period
Antibiotic No. of Patients Reporting An Allergy % Per Total Charts Reviewed
Penicillin 428 10.6
Sulfonamide 271 6.7
Quinolone 108 2.7
Cephalosporin 81 2.0
Macrolide 65 1.6
Vancomycin 39 0.9
Tetracycline 20 0.5
Clindamycin 18 0.4
Metronidazole 9 0.2
Linezolid 2 0.05
Figure 2
Study design with inclusion and exclusion criteria. Abbreviations: IgE, immunoglobulin E.
Information Gathered During the Penicillin Skin Test Study
Categories No. of Patients (%)
  • NOTE: Abbreviations: IgE, Immunoglobulin E.

Time since last reported penicillin use
1 month1 year 6 (4)
25 years 39 (27)
610 years 23 (16)
>10 years 78 (53)
Reported IgE‐mediated reactions
Bronchospasm 23 (16)
Urticarial rash 100 (68)
Edema 32 (22)
Anaphylaxis 21 (14)
Age on admission, y
2050 28 (19)
5160 29 (20)
6170 41 (28)
7180 24 (16)
>80 24 (16)
Gender
Male 55 (40)
Female 88 (60)
Race
White 82 (56)
Black 61 (42)
Hispanic 3 (2)
Infections being treated
Bacteremia 7 (4.8)
Catheter‐related bloodstream infection 2 (1.4)
Empyema 1 (0.7)
Epidural abscess 2 (1.4)
Infective endocarditis 4 (2.7)
Intra‐abdominal infection 24 (16.4)
Meningitis 1 (0.7)
Neutropenic fever 1 (0.7)
Osteomyelitis 6 (4.1)
Pericardial effusion 1 (0.7)
Prosthetic joint infection 5 (3.4)
Pneumonia 40 (27.4)
Skin and soft‐tissue infection 20 (13.7)
Syphilis 3 (2.1)
Urinary tract infection 29 (19.7)

DISCUSSION

PST is the most rapid, sensitive, and cost‐effective modality for evaluating patients with immediate allergic reactions to penicillin. Over 90% of individuals reporting a history of penicillin allergy have a negative PST, implying that most patients who test negative are not truly allergic.[7, 9, 10, 11, 12] Our study shows that the reapproved PST with the PPL and penicillin G determinants continues to have a high NPV. A patient with a negative PST result is generally at low risk of developing an immediate‐type hypersensitivity reaction to penicillin.[2, 11] A negative PST frequently allowed the use of less expensive agents that would otherwise have been avoided because of a reported allergy. The estimated annual savings of $82,000 from antibiotic alteration, with successful transition to a β‐lactam agent after a negative PST, illustrates the test's value, supports its validity, and makes this study novel.

Many β‐lactamase inhibitor combinations (eg, piperacillin‐tazobactam), fourth‐generation cephalosporins (eg, cefepime), and carbapenems remain costly. Despite this, we were still able to achieve a significant reduction in overall cost. In addition to the financial benefits, PST allowed the use of more appropriate agents with fewer potential adverse effects. Narrow‐spectrum, non‐β‐lactam agents were sometimes changed to a broader‐spectrum β‐lactam agent, and we frequently consolidated 2 agents into a single broad‐spectrum β‐lactam. This led to more patients receiving broad‐spectrum agents after the PST (72 patients before vs 89 after). However, we were able to avoid second‐line agents, such as aztreonam, vancomycin, linezolid, daptomycin, and tobramycin, in many patients with infections that are often best treated with penicillin‐based antibiotics (eg, syphilis, group B Streptococcus infections). With the increasing incidence and recovery of multidrug‐resistant bacteria, PST may also allow use of potentially more effective antimicrobial agents.

A possible limitation is that our prevalence of true penicillin allergy was <1%, whereas Bousquet et al reported a higher prevalence of approximately 20%.[7] Although our prevalence may not be generalizable, Bousquet's study assessed only patients whose reported allergies had occurred <5 years prior.

The introduction of PST into clinical practice will allow trained healthcare providers to prescribe cheaper, more appropriate, and less toxic antimicrobial agents. The overall benefit of being able to reintroduce penicillin agents when needed in the future is even greater than the cost savings described here. PST should become a standard of care when prescribing antibiotics to patients with a history of penicillin allergy. Medical providers should be aware of its utility, acquire training, and incorporate it into their practice.

Acknowledgment

Disclosures: Paul P. Cook, MD, has potential conflicts of interest with Gilead (investigator), Pfizer (investigator), Merck (investigator and speakers' bureau), and Forest (speakers' bureau). Neither he nor any of the other authors has received any sources of funding for this article. For the remaining authors, no conflicts were declared. The corresponding author, Ramzy Rimawi, MD, had full access to all of the data in the study and had final responsibility for the decision to submit for publication.

References
  1. Jost BC, Wedner HJ, Bloomberg GR. Elective penicillin skin testing in a pediatric outpatient setting. Ann Allergy Asthma Immunol. 2006;97(6):807–812.
  2. US Department of Veterans Affairs Web site. Benzylpenicilloyl polylysine (PRE‐PEN) national drug monograph. May 2012. Available at: http://www.pbm.va.gov/DrugMonograph.aspx. Accessed September 1, 2012.
  3. Park M, James T. Diagnosis and management of penicillin allergy. Mayo Clin Proc. 2005;80(3):405–410.
  4. PRE‐PEN penicillin skin test antigen. Available at: http://www.alk‐abello.com/us/products/pre‐pen/Pages/PREPEN.aspx. Accessed September 1, 2012.
  5. Solensky R, Khan DA, Bernstein IL, et al.; Joint Task Force on Practice Parameters; American Academy of Allergy, Asthma and Immunology; American College of Allergy, Asthma and Immunology; Joint Council of Allergy, Asthma and Immunology. Drug allergy: an updated practice parameter. Ann Allergy Asthma Immunol. 2010;105:259–273.
  6. Parker CW. Immunochemical mechanisms in penicillin allergy. Fed Proc. 1965;24:51–54.
  7. Bousquet PJ, Pipet A, Bousquet‐Rouanet L, et al. Oral challenges are needed in the diagnosis of beta‐lactam hypersensitivity. Clin Exp Allergy. 2008;38(1):185–190.
  8. Chow SC, Shao J, Wang H. Sample Size Calculations in Clinical Research. New York, NY: Chapman & Hall/CRC; 2003.
  9. Richter AG, Wong G, Goddard S, et al. Retrospective case series analysis of penicillin allergy testing in a UK specialist regional allergy clinic. J Clin Pathol. 2011;64:1014–1018.
  10. Stember RH. Prevalence of skin test reactivity in patients with convincing, vague and unacceptable histories of penicillin allergy. Allergy Asthma Proc. 2005;26(1):59–64.
  11. Valyasevi MA, Van Dellen RG. Frequency of systemic reactions to penicillin skin tests. Ann Allergy Asthma Immunol. 2000;85:363–365.
  12. Lee CE, Zembower TR, Fotis MA, et al. The incidence of antimicrobial allergies in hospitalized patients. Arch Intern Med. 2000;160:2819–2822.
Issue
Journal of Hospital Medicine - 8(6)
Page Number
341-345
Display Headline
The impact of penicillin skin testing on clinical practice and antimicrobial stewardship
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Ramzy H. Rimawi, MD, Brody School of Medicine–East Carolina University, Doctor's Park 6A, Mail Stop 715, Greenville, NC 27834; Telephone: 252‐744‐4500; Fax: 252‐744‐3472; E‐mail: [email protected]

Duloxetine reduces chemo-induced neuropathy

Article Type
Changed
Mon, 01/07/2019 - 11:40
Display Headline
Duloxetine reduces chemo-induced neuropathy

A 5-week course of daily oral duloxetine reduced pain and improved function and quality of life for patients with chemotherapy-induced peripheral neuropathy, according to a report in the April 3 issue of JAMA.

Duloxetine’s effects on chemotherapy-induced peripheral neuropathic pain were measured in a randomized, double-blind, placebo-controlled, crossover clinical trial involving 231 cancer patients aged 25 years and older who had been treated with platinum or taxane agents. Study subjects were approximately twice as likely to experience a 30% reduction in pain while taking duloxetine as while taking placebo and were 2.4 times more likely to experience a 50% reduction in pain, said Ellen M. Lavoie Smith, Ph.D., of the University of Michigan School of Nursing, Ann Arbor, and her associates. The data were presented at the 2012 annual meeting of the American Society of Clinical Oncology.

Patients also reported better daily functioning with duloxetine, compared with placebo, including improved scores on measures assessing numbness, tingling, or discomfort of the hands or feet; tinnitus or difficulty hearing; joint pain; muscle cramps and weakness; and difficulty walking, dressing, or feeling small objects in the hands. Pain-related quality of life also improved to a greater degree with duloxetine (mean change of 2.44 points out of 44 possible points on the Functional Assessment of Cancer Treatment, Gynecologic Oncology Group Neurotoxicity subscale) than with placebo (mean change of 0.87 points).

There were no hematologic or grade 4 adverse events. Mild adverse events were reported by 16% during duloxetine treatment and 27% during placebo treatment, and moderate adverse effects were reported by 7% and 3%, respectively. These included fatigue, insomnia, and nausea in both patient groups, the investigators said (JAMA 2013;309:1359-67).

This study was supported by the National Cancer Institute and the Alliance Statistics and Data Center. Study drugs and placebo were supplied by Eli Lilly. Dr. Smith reported no conflicts of interest, and one of her associates reported ties to Genentech.

[email protected]

Article Source

FROM JAMA

Vitals

Major finding: Study subjects were 2.4 times more likely to experience a 50% pain reduction while taking duloxetine than while taking placebo.

Data source: A randomized, double-blind, placebo-controlled crossover trial involving 231 cancer patients.

Disclosures: This study was supported by the National Cancer Institute and the Alliance Statistics and Data Center. Study drugs and placebo were supplied by Eli Lilly. Dr. Smith reported no conflicts of interest, and one of her associates reported ties to Genentech.

How to avoid opioid misuse

Article Type
Changed
Mon, 01/14/2019 - 10:46
Display Headline
How to avoid opioid misuse

Opioids have become the standard of care for numerous chronic pain complaints and are the most misused drugs in the United States.1 The result: A public health issue with challenges for patients with pain, clinicians treating pain, and the broader community. (See “Opioid analgesic misuse: Scope of the problem,” below1-7).

Ultimately, clinicians are faced with trying to provide adequate pain relief while predicting which patients are at risk for misuse. An expert panel commissioned by the American Pain Society and American Academy of Pain Medicine (APS/AAPM) reviewed the evidence and issued clinical guidelines for long-term opioid therapy in chronic noncancer pain.8 Using the APS/AAPM framework, this article discusses how to:

  • identify the risk of problem use in the individual patient

  • monitor opioid therapy to ensure safe prescribing

  • determine when to terminate opioid therapy in cases of opioid misuse.  


OPIOID ANALGESIC MISUSE: SCOPE OF THE PROBLEM

Americans consume an estimated 80% of the global supply of prescription opioids.2 From 1997 to 2007, average sales of opioid analgesics per person increased 402%.3 Because opioid analgesics are increasingly available in the community,4 the prevalence of opioid misuse has followed suit. Opioid analgesics have become the most misused prescription drug class in the United States, second only to marijuana among all illicitly used substances.1

Nonmedical users of opioid analgesics numbered 4.5 million in 2011, and 1.8 million opioid analgesic users met diagnostic criteria for dependence or abuse.1 In 2007, the costs to society of opioid analgesic abuse were estimated at $25.6 billion due to lost productivity, $25.0 billion due to health care costs, and $5.1 billion due to criminal justice costs, totaling $55.7 billion.5

Regardless of whether opioid analgesics are obtained by prescription or by diversion (shared, stolen, or illegally purchased medication), their misuse in all its forms is a significant public health problem. Opioid analgesic–related emergency department visits increased 111% from 2004 to 2008, to a total of 305,900 visits.6 Deaths involving opioid analgesics, including intentional and unintentional overdoses, quadrupled from 1999 to 2008.7 Additionally, from 1999 to 2009, national admission rates for treatment of an opioid analgesic–related substance use disorder increased nearly sixfold.7

Before treatment: Determine misuse risk

Despite their widespread use, long-term opioid analgesics are not recommended as first-choice therapy.8 Evidence supporting long-term efficacy is limited, and studies indicate modest clinical effectiveness.9 Concerns also are emerging about the safety of long-term opioid use, including iatrogenic opioid-related substance use disorders. Even categorizing opioid misuse is difficult because consensus is lacking on misuse terminology (TABLE 1).8,10-12

On the other hand, many patients with chronic pain do benefit from opioid analgesics, and most who are prescribed long-term opioid therapy do not misuse their medications. The use of opioid analgesics for chronic pain presents an opportunity for misuse in a subset of susceptible people.


Key Point

The use of opioid analgesics for chronic pain presents an opportunity for misuse in a subset of susceptible people.

TABLE 1

Glossary of opioid use terminology

DSM, Diagnostic and Statistical Manual of Mental Disorders.
Aberrant drug-related behavior Opioid-related behavior that demonstrates nonadherence to the patient-clinician agreed-upon therapeutic plan8
Misuse Use of an opioid in a manner other than how it is prescribed10,11
Abuse (1) Illicit opioid use that is detrimental to the user or others;10 (2) nonmedical use of an opioid for the purpose of attaining a “high”;11 or (3) a DSM-IV-TR substance use disorder diagnosis, evidenced by a maladaptive pattern of opioid use, leading to clinically significant impairment or distress as manifested by ≥1 of the following criteria in a 12-month period:
  • use resulting in failure to fulfill major role obligations

  • use when it is physically hazardous

  • continued use in spite of legal problems

  • continued use despite social or interpersonal problems12

Dependence A DSM-IV-TR substance use disorder diagnosis, evidenced by a maladaptive pattern of opioid use, leading to clinically significant impairment or distress as manifested by ≥3 of the following criteria in a 12-month period:
  • tolerance

  • withdrawal

  • opioid taken in larger amounts or over longer period than intended

  • inability to cut down

  • great deal of time spent obtaining and using opioids

  • reduced activities due to opioid use

  • continued use despite physical or psychological problems12

Risk factors thought to increase susceptibility include younger age, more severe pain intensity, multiple pain complaints, history of a substance use disorder, and history of a psychiatric disorder.2 Identifying individuals with potential for misuse is difficult, however, and clinicians’ attempts are not necessarily accurate.13

Screening tools. The APS/AAPM guidelines recommend empirically derived screening questionnaires (TABLE 2)8 to help you identify misuse potential before initiating opioid therapy. Instruments also are available to monitor misuse for individuals already in treatment. The Screener and Opioid Assessment for Patients with Pain (SOAPP) appears to be the most predictive of misuse potential, although selecting a screening instrument may depend on particular practice needs.14 These tools are most valuable when used within a comprehensive evaluation that includes the clinical interview with history and pain assessment.

TABLE 2

Questionnaires for screening and opioid misuse risk identification8

Risk assessment tools
Screener and Opioid Assessment for Patients with Pain (SOAPP) http://www.painedu.org/soapp.asp Predicts how much monitoring a patient will need on long-term opioid therapy
Opioid Risk Tool (ORT) http://www.partnersagainstpain.com/printouts/Opioid_Risk_Tool.pdf Assesses for known conditions that indicate higher risk for medication misuse, including history of substance abuse, age, history of sexual abuse, and psychiatric disorders
Diagnosis, Intractability, Risk, Efficacy (DIRE) http://www.fmdrl.org/index.cfm?event=c.getAttachment&riid=6613 Assigns the patient a score of 1 to 3 for each of 4 factors: diagnosis, intractability, risk (psychological, chemical health, reliability, social support), and efficacy
Monitoring tools during long-term opioid therapy
Pain Assessment and Documentation Tool (PADT) http://www.ucdenver.edu/academics/colleges/PublicHealth/research/centers/maperc/online/Documents/Pain%20Assessment%20Documentation%20Tool%20%28PADT%29.pdf Assesses pain relief, daily functioning, and opioid-related adverse events; also whether patient appears to be engaging in potential aberrant drug-related behaviors
Current Opioid Misuse Measure (COMM) http://www.painedu.org/soapp.asp Assists in identifying patients exhibiting aberrant drug-related behaviors
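
As an illustration of how one of these instruments is scored, the sketch below totals DIRE-style ratings. In the published instrument the risk factor comprises the 4 subcategories listed in the table, giving 7 items rated 1 to 3 each and a total of 7–21; the ≥14 cutoff for a suitable long-term opioid candidate is commonly cited but is our assumption here, not a value from this article.

```python
# Illustrative sketch of DIRE-style scoring (not the validated instrument).
DIRE_ITEMS = (
    "diagnosis", "intractability",
    "risk_psychological", "risk_chemical_health",
    "risk_reliability", "risk_social_support",
    "efficacy",
)

def dire_total(ratings: dict[str, int]) -> int:
    """Sum the 7 item ratings; each item is rated 1 (unfavorable) to 3 (favorable)."""
    assert set(ratings) == set(DIRE_ITEMS)
    assert all(1 <= v <= 3 for v in ratings.values()), "each item is rated 1-3"
    return sum(ratings.values())

scores = dict.fromkeys(DIRE_ITEMS, 2)  # a hypothetical middle-of-the-road patient
total = dire_total(scores)             # -> 14
# The >=14 threshold is a commonly cited cutoff, assumed here for illustration.
print(total, "suitable candidate" if total >= 14 else "not a suitable candidate")
```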

When you identify someone at high risk of opioid misuse, proceed carefully using multiple sources of clinical information. Balance appropriate pain care with safeguarding against misuse. In the absence of evidence of current misuse, the decision depends on clinical judgment. You might try alternative pain treatments to avoid opioid exposure or consider opioid analgesics with additional monitoring of prescribing (TABLE 3).8

TABLE 3

Practical strategies for addressing opioid misuse8

Before treatment
  • Conduct a thorough history, including substances (alcohol and others)

  • Consider using empiric screening tools (TABLE 2)

  • Evaluate known risk factors

  • Consider nonopioid treatment with, or in place of, opioid therapy

  • Enhance monitoring for patients at moderate to high risk of misuse

  • Incorporate opioid prescribing guidelines into clinical practice

  • Set treatment goals and discuss expectations with the patient before starting opioid therapy

During treatment
  • Begin opioid trial, and base continuing therapy on clinical response

  • Routinely assess the patient; document opioid therapy efficacy, adverse effects, and evidence of misuse

  • Perform random urine drug screening, per policy

  • Obtain patient information from state’s prescription monitoring program

  • Address, evaluate, and respond to questionable use, per policy

When things go wrong
  • Evaluate behavior and determine course of action if questionable use occurs

  • Address questionable use with the patient

  • Evaluate benefit of continuing opioid therapy

  • Consider referral to an addiction specialist for consultation

  • Consider referral to a pain specialist

  • Initiate opioid taper if discontinuing; consider addiction consult if opioid use disorder is present

Managing risk during treatment

Opioid trial. The APS/AAPM panel8 and the World Health Organization analgesic ladder for treating cancer pain15 recommend an opioid trial before long-term opioids are prescribed. This approach assumes that opioid therapy may not be universally effective and appropriate for all patients and all pain complaints for which opioids are indicated.

By agreeing to an evaluation period, such as 30 days, you and your patient understand that opioid treatment may not continue beyond the trial without an accompanying treatment response. Whereas you may tailor specific outcomes to the individual, a successful response should include:

  • reduced pain

  • increased function (such as return to work or other valued activities)

  • improved quality of life.

If the agreed-upon outcomes are not met, consider discontinuing the opioid trial and trying alternative treatments, as sketched below. Full discussion of the well-documented strategies for managing opioid therapy is beyond the scope of this article. (See other sources for information about strategies such as opioid rotation, which involves switching from one opioid to another in an effort to increase therapeutic benefit or reduce harm.16,17)
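
To make the trial logic concrete, the sketch below encodes the agreed-upon evaluation: continue past the trial window only if the pre-specified response criteria are all met. The three criteria come from the list above; the specific 30% pain-reduction threshold is a hypothetical placeholder, not a guideline value.

```python
from dataclasses import dataclass

@dataclass
class TrialOutcome:
    pain_reduction_pct: float   # % reduction from baseline pain score
    function_improved: bool     # e.g., return to work or other valued activities
    qol_improved: bool          # patient-reported quality of life

def continue_past_trial(o: TrialOutcome, min_pain_reduction_pct: float = 30.0) -> bool:
    """Continue long-term opioid therapy only if all agreed-upon response
    criteria are met; otherwise discontinue the trial and try alternatives.
    The 30% threshold is a hypothetical example, not a guideline value."""
    return (o.pain_reduction_pct >= min_pain_reduction_pct
            and o.function_improved
            and o.qol_improved)

print(continue_past_trial(TrialOutcome(35.0, True, True)))   # True: continue
print(continue_past_trial(TrialOutcome(40.0, False, True)))  # False: reassess
```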

Monitoring aids. In addition to screening and monitoring questionnaires, urine drug screens and prescription monitoring programs (PMPs) can help you objectively monitor for aberrant drug-related behaviors that may indicate misuse.

Urine drug screens can identify substance abuse or dependence and potential problems you might not have detected.2 When used appropriately, urine drug screens can provide useful information about an individual’s substance abuse potential (such as a positive test for an illicit substance). The absence of a prescribed opioid may be as significant as a positive finding because this may suggest compliance issues or diversion.
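
A hedged sketch of the reconciliation logic described here: compare what was prescribed against what the screen detected, flagging both unexpected positives and the absence of the prescribed opioid. The drug names and panel are illustrative; real screens involve metabolites and cross-reactivity that this simplification ignores.

```python
def reconcile_uds(prescribed: set[str], detected: set[str]) -> list[str]:
    """Flag the two patterns discussed in the text. Illustrative only:
    real urine drug screens detect metabolites and have cross-reactivity,
    so interpretation requires clinical and laboratory context."""
    flags = []
    for drug in detected - prescribed:
        flags.append(f"unexpected positive: {drug} (possible substance misuse)")
    for drug in prescribed - detected:
        flags.append(f"prescribed {drug} not detected (possible nonadherence or diversion)")
    return flags

# Hypothetical example: oxycodone prescribed; screen shows an illicit
# substance and no oxycodone.
print(reconcile_uds({"oxycodone"}, {"cocaine"}))
```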

Prescription monitoring programs have been established by most states since 2002 through grants from the Department of Justice. PMPs store prescription drug information from pharmacies in a statewide database and develop algorithms that can detect behaviors suggesting opioid misuse.18 For example, an algorithm may track factors such as having 5 or more prescribers, 3 or more pharmacies, or 3 or more early refills within 1 year.19
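
The screening rule described above, flagging 5 or more prescribers, 3 or more pharmacies, or 3 or more early refills within 1 year, can be sketched directly. The record fields and the rolling one-year window are our illustrative assumptions, not a real PMP schema.

```python
from datetime import date, timedelta
from typing import NamedTuple

class Fill(NamedTuple):
    filled_on: date
    prescriber: str
    pharmacy: str
    early_refill: bool  # filled well before the prior supply should run out

def pmp_flags(fills: list[Fill], asof: date) -> list[str]:
    """Sketch of a PMP screening rule from the text: >=5 prescribers,
    >=3 pharmacies, or >=3 early refills within 1 year. Field names and
    windowing are illustrative assumptions."""
    window = [f for f in fills if asof - timedelta(days=365) <= f.filled_on <= asof]
    flags = []
    if len({f.prescriber for f in window}) >= 5:
        flags.append(">=5 prescribers in 1 year")
    if len({f.pharmacy for f in window}) >= 3:
        flags.append(">=3 pharmacies in 1 year")
    if sum(f.early_refill for f in window) >= 3:
        flags.append(">=3 early refills in 1 year")
    return flags

# Hypothetical records: 3 early refills within the past year trigger a flag.
fills = [Fill(date(2013, m, 1), "dr_a", "pharm_1", True) for m in (1, 2, 3)]
print(pmp_flags(fills, asof=date(2013, 6, 1)))  # ['>=3 early refills in 1 year']
```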

Individual states administer PMPs differently, but prescribers generally can request information to monitor individual patients and detect illicit behaviors. Although relatively new, PMPs have been shown to reduce prescription sales,20 doctor shopping,19 and opioid analgesic misuse.21 A comprehensive list of state PMPs is available from the Alliance of States with Prescription Monitoring Programs (www.pmpalliance.org/content/pmp-access).


Key Point

Although relatively new, prescription monitoring programs have been shown to reduce doctor shopping and opioid analgesic misuse.

Responding to evidence of aberrant behavior

Even when you follow recommended opioid risk mitigation strategies, expect some individuals to show aberrant drug-taking behavior, abuse, or even the emergence of a co-occurring substance use disorder. Although evidence is limited regarding best practices in these circumstances, terminating opioid treatment is not necessarily the only option.8

Should you identify aberrant drug-related behaviors or any form of opioid analgesic misuse, evaluate the patient to determine the circumstances and immediately address the behavior. For example, using more medication than prescribed may be a sign of inadequately managed pain or a change in clinical status, rather than an indication of abuse.

Referrals may be beneficial as part of your evaluation process. A pain specialist may offer alternative treatment approaches to mitigate medication overuse. An addiction specialist can evaluate patient safety for continued treatment with opioids, facilitate referrals for treatment of a substance use disorder, and provide consultation if discontinuing opioid therapy is appropriate.

Intervention. The patient’s pain complaint will persist whether or not you continue opioids, and substance abuse treatment may complement pain management. Even for an individual who continues opioid therapy, substance abuse treatment can provide tools for understanding and managing substance misuse. For instance, a cognitive-behavioral training program helped curb misuse and increase adherence in high-risk patients on opioid therapy for chronic back pain.22

Providing specialized care before you consider terminating opioid therapy allows people to address their reasons for misusing. Integrated treatment by a clinician specializing in co-occurring chronic pain and addiction may be particularly beneficial, as pain is an important motivator of individuals seeking treatment for an opioid use disorder.23

Termination. If, after additional resources and referral, an individual fails to make progress toward the therapeutic goal, you may need to terminate long-term opioid therapy. By making this decision, you may prevent the emergence of an opioid use disorder. Even so, telling someone that you are stopping opioid treatment can be a difficult discussion. The National Institute on Drug Abuse provides a wealth of online resources to assist with these and other opioid misuse conversations.24,25

Opioid detoxification is complex and should be managed and monitored to mitigate opioid withdrawal symptoms. Unfortunately, very little clinical guidance exists on effective opioid taper strategies for chronic pain patients. Consultation with an addiction specialist is recommended to assist with discontinuing treatment.

Future directions: A role for buprenorphine?

The introduction of transdermal buprenorphine in 2001 spurred new interest in this medication for treating moderate to severe chronic pain in the United States.26 Buprenorphine’s reported lower abuse potential may differentiate it from other opioid analgesics.27 Although a 2006 report showed evidence of modest diversion and abuse of buprenorphine,28 survey data and human laboratory studies demonstrate consistently that the abuse potential is lower—particularly with the combined buprenorphine/naloxone formulation—than with other opioids.29

Sublingual buprenorphine formulations, with and without naloxone, are FDA approved for the treatment of opioid dependence, but not for pain. Thus, it is a medication with analgesic properties that is approved only for an opioid use disorder. Some preliminary evidence supports off-label use of sublingual buprenorphine for chronic pain,30 but more research is needed before this approach can be recommended.

Additional clinical studies are examining whether the sublingual formulation’s efficacy for pain is comparable to other buprenorphine formulations. If this is supported, buprenorphine may become an appropriate, safer option for patients at risk of misusing who might benefit from continued opioid therapy.


Key Point

Some preliminary evidence supports off-label use of sublingual buprenorphine for chronic pain, but more research is needed.

Maintaining a rational, evidence-based approach

Opioid analgesic misuse is a serious public health problem. It would be unfortunate, however, if clinicians were to avoid medically appropriate opioid prescribing for people with chronic pain. Rational, evidence-based strategies to mitigate opioid misuse are the appropriate goal, accompanied by efforts to improve chronic pain treatment with and without opioids. To provide safe and effective opioid therapy, we urge you to develop a proactive approach informed by clinical guidelines, clinical experience, and the scientific literature.


Key Point

While opioid analgesic misuse is a serious problem, it would be unfortunate if clinicians avoided prescribing opioids for people in chronic pain.


Disclosure

Dr. Potter receives grant support from the National Institute on Drug Abuse K23 DA02297 (Potter) and U10 DA020024 (Trivedi) and serves as a consultant to Observant LLC. Ms. Marino reported no potential conflict of interest relevant to this article.

References

  1. Results from the 2011 National Survey on Drug Use and Health: summary of national findings. Rockville, MD: Substance Abuse and Mental Health Services Administration, 2012. Available at: http://www.samhsa.gov/data/NSDUH.aspx. Accessed December 26, 2012.
  2. Sehgal N, Manchikanti L, Smith HS. Prescription opioid abuse in chronic pain: a review of opioid abuse predictors and strategies to curb opioid abuse. Pain Physician. 2012;15(3 suppl):ES67–ES92.
  3. Manchikanti L, Fellows B, Ailinani H, et al. Therapeutic use, abuse, and nonmedical use of opioids: a ten-year perspective. Pain Physician. 2010;13:401–435.
  4. Boudreau D, Von Korff M, Rutter CM, et al. Trends in long-term opioid therapy for chronic non-cancer pain. Pharmacoepidemiol Drug Saf. 2009;18:1166–1175.
  5. Birnbaum HG, White AG, Schiller M, et al. Societal costs of prescription opioid abuse, dependence, and misuse in the United States. Pain Med. 2011;12:657–667.
  6. Centers for Disease Control and Prevention. Emergency department visits involving nonmedical use of selected prescription drugs - United States, 2004-2008. MMWR Morb Mortal Wkly Rep. 2010;59:705–734.
  7. Centers for Disease Control and Prevention. Overdoses of prescription opioid pain relievers - United States, 1999-2008. MMWR Morb Mortal Wkly Rep. 2011;60:1487–1492.
  8. Chou R, Fanciullo GJ, Fine PG, et al. Clinical guidelines for the use of chronic opioid therapy in chronic noncancer pain. J Pain. 2009;10:113–130.
  9. Martell BA, O’Connor PG, Kerns RD, et al. Systematic review: opioid treatment for chronic back pain: prevalence, efficacy, and association with addiction. Ann Intern Med. 2007;146:116–127.
  10. Butler SF, Budman SH, Fernandez KC, et al. Development and validation of the Current Opioid Misuse Measure. Pain. 2007;130:144–156.
  11. Katz NP, Adams EH, Chilcoat H, et al. Challenges in the development of prescription opioid abuse-deterrent formulations. Clin J Pain. 2007;23:648–660.
  12. Diagnostic and Statistical Manual of Mental Disorders, 4th ed., text rev. Washington, DC: American Psychiatric Association, 2000.
  13. Katz N, Fanciullo GJ. Role of urine toxicology testing in the management of chronic opioid therapy. Clin J Pain. 2002;18(4 suppl):S76–S82.
  14. Moore TM, Jones T, Browder JH, et al. A comparison of common screening methods for predicting aberrant drug-related behavior among patients receiving opioids for chronic pain management. Pain Med. 2009;10:1426–1433.
  15. World Health Organization. Cancer: WHO’s pain ladder. Available at: http://www.who.int/cancer/palliative/painladder/en. Accessed December 26, 2012.
  16. Fine PG, Portenoy RK. Establishing “best practices” for opioid rotation: conclusions of an expert panel. J Pain Symptom Manage. 2009;38:418–425.
  17. Ballantyne JC, Mao J. Opioid therapy for chronic pain. N Engl J Med. 2003;349:1943–1953.
  18. Worley J. Prescription drug monitoring programs, a response to doctor shopping: purpose, effectiveness, and directions for future research. Issues Ment Health Nurs. 2012;33:319–328.
  19. Katz N, Panas L, Kim M, et al. Usefulness of prescription monitoring programs for surveillance—analysis of Schedule II opioid prescription data in Massachusetts, 1996-2006. Pharmacoepidemiol Drug Saf. 2010;19:115–123.
  20. Simeone R, Holland L. An evaluation of prescription monitoring programs, September 1, 2006. Available at: https://www.bja.gov/publications/pdmpexecsumm.pdf. Accessed December 26, 2012.
  21. Wang J, Christo PJ. The influence of prescription monitoring programs on chronic pain management. Pain Physician. 2009;12:507–515.
  22. Jamison RN, Ross EL, Michna E, et al. Substance misuse treatment for high-risk chronic pain patients on opioid therapy: a randomized trial. Pain. 2010;150:390–400.
  23. Potter JS.  Co-occurring chronic pain and opioid addiction: is there a role for integrated treatment? Honolulu, HI: American Psychiatric Association Annual Meeting, 2011.
  24. National Institute on Drug Abuse. Talking to patients about sensitive topics: communication and screening techniques for increasing the reliability of patient self-report. Available at: http://www.drugabuse.gov/nidamed/centers-excellence/resources/talking-to-patients-about-sensitive-topics-communication-screening-techniques-increasing. Accessed January 10, 2013.
  25. National Institute on Drug Abuse. Managing pain patients who abuse prescription drugs. Available at: http://www.drugabuse.gov/nidamed/etools/managing-pain-patients-who-abuse-prescription-drugs. Accessed January 10, 2013.
  26. Pergolizzi J, Aloisi AM, Dahan A, et al. Current knowledge of buprenorphine and its unique pharmacological profile. Pain Pract. 2010;10:428–450.
  27. Park HS, Lee HY, Kim YH, et al. A highly selective kappa-opioid receptor agonist with low addictive potential and dependence liability. Bioorg Med Chem Lett. 2006;16:3609–3613.
  28. Substance Abuse and Mental Health Services Administration. Diversion and abuse of buprenorphine: a brief assessment of emerging indicators. Final report, 2006. Available at: http://buprenorphine.samhsa.gov. Accessed December 26, 2012.
  29. Comer SD, Sullivan MA, Vosburg SK, et al. Abuse liability of intravenous buprenorphine/naloxone and buprenorphine alone in buprenorphine-maintained intravenous heroin abusers. Addiction. 2010;105:709–718.
  30. Malinoff HL, Barkin RL, Wilson G. Sublingual buprenorphine is effective in the treatment of chronic pain syndrome. Am J Ther. 2005;12:379–384.
Author and Disclosure Information

Jennifer Sharpe Potter, PhD, MPH, and Elise N. Marino, BA, Department of Psychiatry, The University of Texas Health Science Center at San Antonio

Opioids have become the standard of care for numerous chronic pain complaints and are the most misused drugs in the United States.1 The result: A public health issue with challenges for patients with pain, clinicians treating pain, and the broader community. (See “Opioid analgesic misuse: Scope of the problem,” below1-7).

Ultimately, clinicians are faced with trying to provide adequate pain relief while predicting which patients are at risk for misuse. An expert panel commissioned by the American Pain Society and American Academy of Pain Medicine (APS/AAPM) reviewed the evidence and issued clinical guidelines for long-term opioid therapy in chronic noncancer pain.8 Using the APS/AAPM framework, this article discusses how to:

  • identify the risk of problem use in the individual patient

  • monitor opioid therapy to ensure safe prescribing

  • determine when to terminate opioid therapy in cases of opioid misuse.  


OPIOID ANALGESIC MISUSE: SCOPE OF THE PROBLEM

Americans consume an estimated 80% of the global supply of prescription opioids.2 From 1997 to 2007, average sales of opioid analgesics per person increased 402%.3 Because opioid analgesics are increasingly available in the community,4 the prevalence of opioid misuse has followed suit. Opioid analgesics have become the most misused drug class in the United States—second only to marijuana among all illicit substances.1

Nonmedical users of opioid analgesics numbered 4.5 million in 2011, and 1.8 million opioid analgesic users met diagnostic criteria for dependence or abuse.1 In 2007, the costs to society of opioid analgesic abuse were estimated at $25.6 billion due to lost productivity, $25.9 billion due to health care costs, and $5.1 billion due to criminal justice costs, totaling $55.7 billion.5

Regardless of whether opioid analgesics are obtained by prescription or diversion (sharing medication, stolen, or purchased illegally), their misuse in all its forms is a significant public health problem. Opioid analgesic–related emergency department visits increased 111% from 2004 to 2008, to a total of 305,900 visits.6 Deaths involving opioid analgesics, including intentional and unintentional overdoses, quadrupled from 1999 to 2008.7 Additionally, from 1999 to 2009, national admission rates for treatment of an opioid analgesic–related substance use disorder increased nearly sixfold.7

Before treatment: Determine misuse risk

Despite their widespread use, long-term opioid analgesics are not recommended as first-choice therapy.8 Evidence supporting long-term efficacy is limited, and studies indicate modest clinical effectiveness.9 Concerns also are emerging about the safety of long-term opioid use, including iatrogenic opioid-related substance use disorders. Even categorizing opioid misuse is difficult because consensus is lacking on misuse terminology (TABLE 1).8,10-12

On the other hand, many patients with chronic pain do benefit from opioid analgesics, and most who are prescribed long-term opioid therapy do not misuse their medications. The use of opioid analgesics for chronic pain presents an opportunity for misuse in a subset of susceptible people.


Key Point

The use of opioid analgesics for chronic pain presents an opportunity for misuse in a subset of susceptible people.

TABLE 1

Glossary of of opioid use terminology

DSM, Diagnostic and Statistical Manual of Mental Disorders.
Aberrant drug-related behavior Opioid-related behavior that demonstrates nonadherence to the patient-clinician agreed-upon therapeutic plan8
Misuse Use of an opioid in a manner other than how it is prescribed10,11
Abuse Illicit opioid use that is detrimental to the user or others10 Nonmedical use of an opioid for the purpose of attaining a “high”11 A DSM-IV-TR substance use disorder diagnosis, evidenced by a maladaptive pattern of opioid use, leading to clinically significant impairment or distress as manifested by ≥1 of the following criteria in a 12-month period:
  • use resulting in failure to fulfill major role obligations

  • use when it is physically hazardous

  • continued use in spite of legal problems

  • continued use despite social or interpersonal problems12

Dependence A DSM-IV-TR substance use disorder diagnosis, evidenced by a maladaptive pattern of opioid use, leading to clinically significant impairment or distress as manifested by ≥3 of the following criteria in a 12-month period:
  • tolerance

  • withdrawal

  • opioid taken in larger amounts or over longer period than intended

  • inability to cut down

  • great deal of time spent obtaining and using opioids

  • reduced activities due to opioid use

  • continued use despite physical or psychological problems12

Risk factors thought to increase susceptibility include younger age, more severe pain intensity, multiple pain complaints, history of a substance use disorder, and history of a psychiatric disorder.2 Identifying individuals with potential for misuse is difficult, however, and clinicians’ attempts are not necessarily accurate.13

 

 

Screening tools. The APS/AAPM guidelines recommend empirically derived screening questionnaires (TABLE 2)8 to help you identify misuse potential before initiating opioid therapy. Instruments also are available to monitor misuse for individuals already in treatment. The Screener and Opioid Assessment for Patients with Pain (SOAPP) appears to be the most predictive of misuse potential, although selecting a screening instrument may depend on particular practice needs.14 These tools are most valuable when used within a comprehensive evaluation that includes the clinical interview with history and pain assessment.

TABLE 2

Questionnaires for screening and opioid misuse risk identification8

Risk assessment tools
Screener and Opioid Assessment for Patients with Pain (SOAPP) http://www.painedu.org/soapp.asp Predicts how much monitoring a patient will need on long-term opioid therapy
Opioid Risk Tool (ORT) http://www.partnersagainstpain.com/printouts/ Opioid_Risk_Tool.pdf Assesses for known conditions that indicate higher risk for medication misuse, including history of substance abuse, age, history of sexual abuse, and psychiatric disorders
Diagnosis, Intractability, Risk, Efficacy (DIRE) http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5& ved=0CEgQFjAE&url=http%3A%2F%2Fwww.fmdrl.org%2Findex.cfm%3Fevent%3Dc .getAttachment%26riid%3D6613&ei=vJ7lULDHFqKc2AWCiIGwAQ&usg=AFQjCNECSYFnam9UATA-Xm_JQ0cjm6Xdiw& bvm=bv.1355534169,d.b2I Assigns the patient a score of 1 to 3 for each of 4 factors: diagnosis, intractability, risk (psychological, chemical health, reliability, social support), and efficacy
Monitoring tools during long-term opioid therapy
Pain Assessment and Documentation Tool (PADT) http://www.ucdenver.edu/academics/colleges/PublicHealth/research/centers/maperc/online/Documents/Pain Assessment Documentation Tool %28PADT%29.pdf Assesses pain relief, daily functioning, and opioid-related adverse events; also whether patient appears to be engaging in potential aberrant drug-related behaviors
Current Opioid Misuse Measure (COMM) http://www.painedu.org/soapp.asp Assists in identifying patients exhibiting aberrant drug-related behaviors

When you identify someone at high risk of opioid misuse, proceed carefully using multiple sources of clinical information. Balance appropriate pain care with safeguarding against misuse. In the absence of evidence of current misuse, the decision depends on clinical judgment. You might try alternative pain treatments to avoid opioid exposure or consider opioid analgesics with additional monitoring of prescribing (TABLE 3).8

TABLE 3

Practical strategies for addressing opioid misuse8

Before treatment
  • Conduct a thorough history, including substances (alcohol and others)

  • Consider using empiric screening tools (TABLE 2)

  • Evaluate known risk factors

  • Consider nonopioid treatment with, or in place of, opioid therapy

  • Enhance monitoring for patients at moderate to high risk of misuse

  • Incorporate opioid prescribing guidelines into clinical practice

  • Set treatment goals and discuss expectations with the patient before starting opioid therapy

During treatment
  • Begin opioid trial, and base continuing therapy on clinical response

  • Routinely assess the patient; document opioid therapy efficacy, adverse effects, and evidence of misuse

  • Perform random urine drug screening, per policy

  • Obtain patient information from state’s prescription monitoring program

  • Address, evaluate, and respond to questionable use, per policy

When things go wrong
  • Evaluate behavior and determine course of action if questionable use occurs

  • Address questionable use with the patient

  • Evaluate benefit of continuing opioid therapy

  • Consider referral to an addiction specialist for consultation

  • Consider referral to a pain specialist

  • Initiate opioid taper if discontinuing; consider addiction consult if opioid use disorder is present

Managing risk during treatment

Opioid trial. The APS/AAPM panel8 and the World Health Organization analgesic ladder for treating cancer pain15 recommend an opioid trial before long-term opioids are prescribed. This approach assumes that opioid therapy may not be universally effective and appropriate for all patients and all pain complaints for which opioids are indicated.

By agreeing to an evaluation period, such as 30 days, you and your patient understand that opioid treatment may not continue beyond the trial without an accompanying treatment response. Whereas you may tailor specific outcomes to the individual, a successful response should include:

  • reduced pain

  • increased function (such as return to work or other valued activities)

  • and improved quality of life.

If the agreed-upon outcomes are not met, consider discontinuing the opioid trial and trying alternative treatments. Full discussion of the well-documented strategies for managing opioid therapy is beyond the scope of this article. (See other sources for information about strategies such as opioid rotation, which involves switching from one opioid to another in an effort to increase therapeutic benefit or reduce harm.16,17 )

Monitoring aids. In addition to screening and monitoring questionnaires, urine drug screens and prescription monitoring programs (PMPs) can help you objectively monitor for aberrant drug-related behaviors that may indicate misuse.

 

 

Urine drug screens can identify substance abuse or dependence and potential problems you might not have detected.2 When used appropriately, urine drug screens can provide useful information about an individual’s substance abuse potential (such as a positive test for an illicit substance). The absence of a prescribed opioid may be as significant as a positive finding because this may suggest compliance issues or diversion.

Prescription monitoring programs have been established by most states since 2002 through grants from the Department of Justice. PMPs store prescription drug information from pharmacies in a statewide database and develop algorithms that can detect behaviors suggesting opioid misuse.18 For example, an algorithm may track factors such as having 5 or more prescribers, 3 or more pharmacies, or 3 or more early refills within 1 year.19

Individual states administer PMPs differently, but prescribers generally can request information to monitor individual patients and detect illicit behaviors. Although relatively new, PMPs have been shown to reduce prescription sales,20 doctor shopping,19 and opioid analgesic misuse.21 A comprehensive list of state PMPs is available from the Alliance of States with Prescription Monitoring Programs (www.pmpalliance.org/content/pmp-access).


Key Point

Although relatively new, prescription monitoring programs have been shown to reduce doctor shopping and opioid analgesic misuse.

Responding to evidence of aberrant behavior

Even when you follow recommended opioid risk mitigation strategies, expect some individuals to show aberrant drug-taking behavior, abuse, or even the emergence of a co-occurring substance use disorder. Although evidence is limited regarding best practices in these circumstances, terminating opioid treatment is not necessarily the only option.8

Should you identify aberrant drug-related behaviors or any form of opioid analgesic misuse, evaluate the patient to determine the circumstances and immediately address the behavior. For example, using more medication than prescribed may be a sign of inadequately managed pain or clinical status, rather than an indication of abuse.

Referrals may be beneficial as part of your evaluation process. A pain specialist may offer alternative treatment approaches to mitigate medication overuse. An addiction specialist can evaluate patient safety for continued treatment with opioids, facilitate referrals for treatment of a substance use disorder, and provide consultation if discontinuing opioid therapy is appropriate.

Intervention. The patient’s pain complaint will persist whether or not you continue opioids, and substance abuse treatment may complement pain management. Even for an individual who continues opioid therapy, substance abuse treatment can provide tools for understanding and managing substance misuse. For instance, a cognitive-behavioral training program helped curb misuse and increase adherence in high-risk patients on opioid therapy for chronic back pain.22

Providing specialized care before you consider terminating opioid therapy allows people to address their reasons for misusing. Integrated treatment by a clinician specializing in co-occurring chronic pain and addiction may be particularly beneficial, as pain is an important motivator of individuals seeking treatment for an opioid use disorder.23

Termination. If, after additional resources and referral, an individual fails to make progress toward the therapeutic goal, you may need to terminate long-term opioid therapy. By making this decision, you may prevent the emergence of an opioid use disorder. Even so, telling someone that you are stopping opioid treatment can be a difficult discussion. The National Institute on Drug Abuse provides a wealth of online resources to assist with these and other opioid misuse conversations.24,25

Opioid detoxification is complex and should be managed and monitored to mitigate opioid withdrawal symptoms. Unfortunately, very little clinical guidance exists on effective opioid taper strategies for chronic pain patients. Consultation with an addiction specialist is recommended to assist with discontinuing treatment.

Future directions: A role for buprenorphine?

The introduction of transdermal buprenorphine in the United States in 2001 spurred new interest in this medication for treating moderate to severe chronic pain.26 Buprenorphine’s reported lower abuse potential may differentiate it from other opioid analgesics.27 Although a 2006 report showed evidence of modest diversion and abuse of buprenorphine,28 survey data and human laboratory studies demonstrate consistently that the abuse potential is lower—particularly with the combined buprenorphine/naloxone formulation—than with other opioids.29

Sublingual buprenorphine formulations, with and without naloxone, are FDA approved for opioid use disorder and opioid dependence, but not for pain. Thus, it is a medication with analgesic properties that is approved for an opioid use disorder. Some preliminary evidence supports off-label use of sublingual buprenorphine for chronic pain,30 but more research is needed before this approach can be recommended.

Additional clinical studies are examining whether the sublingual formulation’s efficacy for pain is comparable to other buprenorphine formulations. If this is supported, buprenorphine may become an appropriate, safer option for patients at risk of misusing who might benefit from continued opioid therapy.

 

 


Key Point

Some preliminary evidence supports off-label use of sublingual buprenorphine for chronic pain, but more research is needed.

Maintaining a rational, evidence-based approach

Opioid analgesic misuse is a serious public health problem. It would be unfortunate, however, if clinicians were to avoid medically appropriate opioid prescribing for people with chronic pain. Rational, evidence-based strategies to mitigate opioid misuse are the appropriate goal, accompanied by efforts to improve chronic pain treatment with and without opioids. To provide safe and effective opioid therapy, we urge you to develop a proactive approach informed by clinical guidelines, clinical experience, and the scientific literature.


Key Point

While opioid analgesic misuse is a serious problem, it would be unfortunate if clinicians avoided prescribing opioids for people in chronic pain.


Disclosure

Dr. Potter receives grant support from the National Institute on Drug Abuse K23 DA02297 (Potter) and U10 DA020024 (Trivedi) and serves as a consultant to Observant LLC. Ms. Marino reported no potential conflict of interest relevant to this article.

References

  1. Results from the 2011 National Survey on Drug Use and Health: summary of national findings. Rockville, MD: Substance Abuse and Mental Health Services Administration, 2012. Available at: http://www.samhsa.gov/data/NSDUH.aspx. Accessed December 26, 2012.
  2. Sehgal N, Manchikanti L, Smith HS. Prescription opioid abuse in chronic pain: a review of opioid abuse predictors and strategies to curb opioid abuse. Pain Physician. 2012;15(3 suppl):ES67–ES92.
  3. Manchikanti L, Fellows B, Ailinani H, et al. Therapeutic use, abuse, and nonmedical use of opioids: a ten-year perspective. Pain Physician. 2010;13:401–435.
  4. Boudreau D, Von Korff M, Rutter CM, et al. Trends in long-term opioid therapy for chronic non-cancer pain. Pharmacoepidemiol Drug Saf. 2009;18:1166–1175.
  5. Birnbaum HG, White AG, Schiller M, et al. Societal costs of prescription opioid abuse, dependence, and misuse in the United States. Pain Med. 2011;12:657–667.
  6. Centers for Disease Control and Prevention. Emergency department visits involving nonmedical use of selected prescription drugs - United States, 2004-2008. MMWR Morb Mortal Wkly Rep. 2010;59:705–734.
  7. Centers for Disease Control and Prevention. Overdoses of prescription opioid pain relievers - United States, 1999-2008. MMWR Morb Mortal Wkly Rep. 2011;60:1487–1492.
  8. Chou R, Fanciullo GJ, Fine PG, et al. Clinical guidelines for the use of chronic opioid therapy in chronic noncancer pain. J Pain. 2009;10:113–130.
  9. Martell BA, O’Connor PG, Kerns RD, et al. Systematic review: opioid treatment for chronic back pain: prevalence, efficacy, and association with addiction. Ann Intern Med. 2007;146:116–127.
  10. Butler SF, Budman SH, Fernandez KC, et al. Development and validation of the Current Opioid Misuse Measure. Pain. 2007;130:144–156.
  11. Katz NP, Adams EH, Chilcoat H, et al. Challenges in the development of prescription opioid abuse-deterrent formulations. Clin J Pain. 2007;23:648–660.
  12. Diagnostic and Statistical Manual of Mental Disorders, 4th ed., text rev. Washington, DC: American Psychiatric Association, 2000.
  13. Katz N, Fanciullo GJ. Role of urine toxicology testing in the management of chronic opioid therapy. Clin J Pain. 2002;18(4 suppl):S76–S82.
  14. Moore TM, Jones T, Browder JH, et al. A comparison of common screening methods for predicting aberrant drug-related behavior among patients receiving opioids for chronic pain management. Pain Med. 2009;10:1426–1433.
  15. World Health Organization. Cancer: WHO’s pain ladder. Available at: http://www.who.int/cancer/palliative/painladder/en. Accessed December 26, 2012.
  16. Fine PG, Portenoy RK. Establishing “best practices” for opioid rotation: conclusions of an expert panel. J Pain Symptom Manage. 2009;38:418–425.
  17. Ballantyne JC, Mao J. Opioid therapy for chronic pain. N Engl J Med. 2003;349:1943–1953.
  18. Worley J. Prescription drug monitoring programs, a response to doctor shopping: purpose, effectiveness, and directions for future research. Issues Ment Health Nurs. 2012;33:319–328.
  19. Katz N, Panas L, Kim M, et al. Usefulness of prescription monitoring programs for surveillance—analysis of Schedule II opioid prescription data in Massachusetts, 1996-2006. Pharmacoepidemiol Drug Saf. 2010;19:115–123.
  20. Simeone R, Holland L. An evaluation of prescription monitoring programs, September 1, 2006. Available at: https://www.bja.gov/publications/pdmpexecsumm.pdf. Accessed December 26, 2012.
  21. Wang J, Christo PJ. The influence of prescription monitoring programs on chronic pain management. Pain Physician. 2009;12:507–515.
  22. Jamison RN, Ross EL, Michna E, et al. Substance misuse treatment for high-risk chronic pain patients on opioid therapy: a randomized trial. Pain. 2010;150:390–400.
  23. Potter JS.  Co-occurring chronic pain and opioid addiction: is there a role for integrated treatment? Honolulu, HI: American Psychiatric Association Annual Meeting, 2011.
  24. National Institute on Drug Abuse. Talking to patients about sensitive topics: communication and screening techniques for increasing the reliability of patient self-report. Available at: http://www.drugabuse.gov/nidamed/centers-excellence/resources/talking-to-patients-about-sensitive-topics-communication-screening-techniques-increasing. Accessed January 10, 2013.
  25. National Institute on Drug Abuse. Managing pain patients who abuse prescription drugs. Available at: http://www.drugabuse.gov/nidamed/etools/managing-pain-patients-who-abuse-prescription-drugs. Accessed January 10, 2013.
  26. Pergolizzi J, Aloisi AM, Dahan A, et al. Current knowledge of buprenorphine and its unique pharmacological profile. Pain Pract. 2010;10:428–450.
  27. Park HS, Lee HY, Kim YH, et al. A highly selective kappa-opioid receptor agonist with low addictive potential and dependence liability. Bioorg Med Chem Lett. 2006;16:3609–3613.
  28. Substance Abuse and Mental Health Services Administration. Diversion and abuse of buprenorphine: a brief assessment of emerging indicators. Final report, 2006. Available at: http://buprenorphine.samhsa.gov. Accessed December 26, 2012.
  29. Comer SD, Sullivan MA, Vosburg SK, et al. Abuse liability of intravenous buprenorphine/naloxone and buprenorphine alone in buprenorphine-maintained intravenous heroin abusers. Addiction. 2010;105:709–718.
  30. Malinoff HL, Barkin RL, Wilson G. Sublingual buprenorphine is effective in the treatment of chronic pain syndrome. Am J Ther. 2005;12:379–384.


Do surgical residents need ethics teaching?

Article Type
Changed
Wed, 01/02/2019 - 08:25
Display Headline
Do surgical residents need ethics teaching?

Recently, I was invited to present surgical grand rounds on an ethics topic.  After the talk, a senior surgeon, now retired, who was in the audience confided to me that in all of his years of residency, he had never had a lecture on ethics.  This off-hand comment raised a question that demands consideration in the contemporary era of increasingly limited time available for teaching in surgical residencies.  Do we need ethics teaching in surgery residencies?

As someone who has spent significant time in the last few years on ethics teaching activities, I reflexively answer “Yes.”  However, it is worthwhile to explore the reasons why, in the education of contemporary surgeons, it is important to focus dedicated attention on the ethical issues that arise in the practice of surgery.

Certainly, it is not true that ethics was previously unimportant in surgery.  In 1915, when the early organizers of the American College of Surgeons were first writing down the qualifications for membership, they emphasized the importance of ethics:  “The moral and ethical fitness of the candidates shall be determined by the reports of surgeons whose names are submitted by the candidate himself, and by such other reports and data as the Credentials Committee and the administration of the College may obtain.”[1]  Thus, the early founders of the College considered “ethical fitness” to be essential to their members.  Since there were clearly ethical and unethical ways to practice surgery, why has the focused emphasis on ethics teaching only occurred in recent decades?

I believe that three changes in surgical care and surgical education explain the recent focus on ethics education in contemporary surgical training programs: the limitations on work hours for surgical residents, the increasing shift to outpatient care, and the increasing number of options for surgical patients brought about by improvements in surgical technology.

To begin with, surgical residents today spend significantly less time in the hospital every week than did surgical residents in years past.  Although one could debate the actual educational value of the additional time that I and my surgical predecessors spent in the hospital, the significant shortening of the time that surgical residents spend with surgical faculty has undeniably reduced opportunities for learning through role modeling.  The many additional hours that residents formerly spent in the hospital gave them greater opportunities to see how their faculty dealt with the challenges of managing ethically complex cases.  Although these interactions were not often thought of as “ethical role modeling” in prior years, there is no question that significant ethical teaching occurred in this informal curriculum.

Second, and closely related to the reduction in surgical resident work hours, has been the significant shift to outpatient surgical care.  This shift has meant that surgical residents, whose time is focused on what happens in the hospital, have even fewer opportunities to witness faculty engaging in many central aspects of the ethical care of surgical patients (e.g., obtaining informed consent for complex surgical procedures, communicating bad news to patients and families, or weighing risks and benefits of high-risk elective surgical procedures).

Perhaps most importantly, today there are more options for surgical therapies than ever before.  The central question for a surgeon in 1913, when the American College of Surgeons was formed, was, “What can be done for this patient?”  Today, in caring for the most complex and critically ill patients, the question foremost for surgeons is often, “What should be done?”  This is not a purely surgical question, but also an ethical one.  Consider a patient who has developed multisystem organ failure after complications from surgery.  Because of advances in critical care, such a patient might be kept alive with technologies such as mechanical ventilation, augmented cardiac output with a ventricular assist device, and hemodialysis.  These therapies cannot be judged appropriate or inappropriate without thoughtful consideration of an individual patient’s overall goals and values.  In such a case, weighing values and probabilities of success or failure relative to a particular patient’s goals moves beyond purely scientific surgical decision making into the realm of ethics.

For all of these reasons, I believe that although surgeons have practiced in an ethical fashion for countless generations, the contemporary education of surgeons should include focused attention on ethics and the ethical implications of the surgical interventions that we recommend for our patients.  Some might argue that the ultimate goal of surgical education should be to fully integrate ethical considerations into the surgical care rendered to patients.  However, there is so much surgical science to be learned in residency that, unless ethics receives dedicated teaching attention, consideration of the ethical implications risks being lost.

Although separating the ethical considerations from the surgical decision making for a particular patient is artificial, the separation is valuable because it emphasizes the differences between surgical science and surgical ethics.  The former depends on anatomy, physiology, and surgical technique; the latter depends on relationships, communication, and patient values.  Although we can train surgical residents to be excellent technicians with a focus purely on surgical science, we can only educate great doctors who are also surgeons by expanding discussions of optimal surgical care to include the considerations central to surgical ethics.

 [1] This important historical background is courtesy of David Nahrwold, MD.

Dr. Angelos is an ACS Fellow, the Linda Kohler Anderson Professor of Surgery and Surgical Ethics, chief, endocrine surgery, and associate director of the MacLean Center for Clinical Medical Ethics at the University of Chicago.



HM13: Bringing Hospital Medicine to the East Coast

Article Type
Changed
Fri, 09/14/2018 - 12:19
Display Headline
HM13: Bringing Hospital Medicine to the East Coast

Every year, hospitalists from across the country come together at SHM’s annual meeting. With this year’s convention located just a few miles south of Washington, D.C., HM13 will be the most convenient meeting for thousands of hospitalists who live and work on or near the East Coast.

In fact, many won’t even need to board a plane to meet up with thousands of fellow hospitalists. For those in Boston, Baltimore, New York, Philadelphia, and other cities in the northeast corridor, the only meeting designed specifically for hospitalists is just a short drive or train ride away.

And when they arrive at the Gaylord National Resort and Convention Center in National Harbor, Md., hospitalists will find a truly expansive experience dedicated to the specialty. In addition to featuring four days of educational content, the meeting gives hospitalists the chance to catch up with old friends and network with new colleagues. Plus, with an exhibit floor filled with the nation’s top recruiters, service providers, and pharmaceutical innovators, hospitalists can get up to speed on the best offerings in the industry in a matter of hours.

Check out our 6-minute feature video: "Five Reasons You Should Attend HM13"

With so many offerings and opportunities in one place, the challenge for most hospitalists isn’t finding something to do—it’s planning their meeting schedule to achieve all of their career and educational goals.

This year, hospitalists can plan their meeting experience with HM13 at Hand, SHM’s mobile application. Now, any hospitalist with a tablet or smartphone and an Internet connection can add educational sessions to their HM13 plan, view paperless abstracts, and connect with other HM13 attendees in advance of the meeting.

For more HM13 information, visit www.hospitalmedicine2013.org.


Brendon Shank is SHM’s associate vice president of communications.

Issue
The Hospitalist - 2013(04)

SHM Tallies Ratio of Hospital Respondents' Observation Admissions to Inpatient Admission Encounters

Article Type
Changed
Fri, 09/14/2018 - 12:19
Display Headline
SHM Tallies Ratio of Hospital Respondents' Observation Admissions to Inpatient Admission Encounters

Johnbuck Creamer, MD

SHM added a new item to its 2012 State of Hospital Medicine report: the ratio of respondents’ observation admissions to inpatient admission encounters. The metric was added because observation encounters have been increasing, with financial consequences for hospitals and patients alike. Survey respondents reported a 20% observation rate for both adult and pediatric practice groups (see Figure 1).

Figure 1. Ratio of Inpatient to Observation Admissions
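
To make the arithmetic behind this metric concrete, here is a minimal sketch of how a practice group might compute both an observation rate and an observation-to-inpatient ratio. The function and the encounter counts are hypothetical illustrations, not survey data.

def observation_stats(observation_encounters: int, inpatient_encounters: int) -> tuple[float, float]:
    # Returns (observation rate, observation-to-inpatient ratio) for a practice group.
    total = observation_encounters + inpatient_encounters
    rate = observation_encounters / total                  # share of all admission encounters
    ratio = observation_encounters / inpatient_encounters  # observation encounters per inpatient encounter
    return rate, ratio

# A 20% observation rate implies 1 observation encounter for every 4
# inpatient encounters, i.e., a 0.25 observation-to-inpatient ratio.
rate, ratio = observation_stats(observation_encounters=200, inpatient_encounters=800)
print(f"observation rate: {rate:.0%}; observation-to-inpatient ratio: {ratio:.2f}")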

Under observation status, services that used to be billed under inpatient status (e.g., chest pain evaluation, treatment of an asthma exacerbation) must be billed by the hospital at much lower outpatient rates. Some hospitals have responded to this financial pressure by creating observation units or making other operational adjustments. One recent analysis suggested that nationwide adoption of such efforts could save billions of dollars.1
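
As a rough illustration of the financial pressure described above, the following sketch computes the revenue a hospital forgoes when short stays are billed at observation rather than inpatient rates. The payment amounts are hypothetical placeholders; actual inpatient (DRG) and outpatient payment rates vary widely by hospital and payer.

INPATIENT_PAYMENT = 4000.0    # assumed payment if a short stay is billed as inpatient
OBSERVATION_PAYMENT = 1500.0  # assumed payment if the same stay is billed as observation

def revenue_forgone(stays_billed_as_observation: int) -> float:
    # Revenue lost when short stays are billed as observation instead of inpatient.
    return stays_billed_as_observation * (INPATIENT_PAYMENT - OBSERVATION_PAYMENT)

# 250 reclassified stays at a $2,500 difference each: $625,000 forgone.
print(f"${revenue_forgone(250):,.0f}")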

Becoming efficient enough to complete short-stay work in a short time, though, does not address all of the observation-related issues facing hospitals. When the Centers for Medicare & Medicaid Services’ (CMS) Recovery Audit Contractors (RACs) determine retrospectively that an inpatient admission should have been an observation encounter, the hospital’s payment is not downgraded but forfeited.2 This development has prompted hospitals to preemptively opt for observation status for certain patients. Case managers and providers increasingly are spending time reviewing inpatient versus observation status throughout a patient’s stay, and many hospitals have turned to third-party contractors to help review observation status.

Observation status has financial implications for patients as well. In the past year, USA Today, The Wall Street Journal, and CNN Money all have reported on patients hit with unexpected out-of-pocket expenses related to observation care.3,4,5 A common theme: a Medicare patient hospitalized with an acute fracture, managed nonoperatively but requiring rehabilitation before returning home. These patients found out too late that observation, a status they often were unaware of, did not count toward CMS’ three-day inpatient-stay requirement for covering rehabilitation costs. Some patients were charged exorbitant prices for noncovered “outpatient” services, such as provision of their routine medications.
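
The coverage logic at issue can be made concrete with a minimal sketch: only consecutive inpatient days count toward the three-day requirement, so a stay spent largely under observation leaves the patient short of the threshold. The status encoding and function name are illustrative assumptions, not CMS terminology.

def qualifies_for_rehab_coverage(day_statuses: list[str], required_days: int = 3) -> bool:
    # True if the stay includes at least `required_days` consecutive inpatient days.
    run = best = 0
    for status in day_statuses:
        run = run + 1 if status == "inpatient" else 0
        best = max(best, run)
    return best >= required_days

# Four days in the hospital, but only two count toward the threshold,
# so the subsequent rehabilitation stay would not be covered.
print(qualifies_for_rehab_coverage(["observation", "observation", "inpatient", "inpatient"]))  # prints False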

Advocacy groups have joined the fray on patients’ behalf, and legal challenges have ensued. AARP and others are educating patients about observation status—and their right to challenge it. The Center for Medicare Advocacy (www.kslaw.com/Library/publication/HH111411_Bagnall.pdf) has filed a lawsuit against the U.S. Department of Health and Human Services on behalf of patients hit with uncovered rehabilitation costs, and the American Hospital Association has teamed with several hospitals to sue over funds forfeited in RAC audits (www.aha.org/content/12/121101-aha-hhs-medicare-com.pdf). Both houses of Congress have legislation (H.R. 1543 and S. 818) seeking to count observation days toward the Medicare three-day rule. For its part, CMS has promised to review observation status and, hopefully, clarify the rules.

Hospitalists, meanwhile, are gearing up for more observation care. The 2012 State of Hospital Medicine report shows that 37% of adult groups and 28% of pediatric groups reported having primary responsibility for observation or short-stay units. My own hospital runs both a clinical decision unit in the ED and a short-stay unit staffed by our hospitalist group. As SHM tracks observation status in future surveys, HM groups will be able to follow this phenomenon among their colleagues and benchmark their own rates of observation encounters.


Dr. Creamer is medical director of the short-stay unit at MetroHealth Medical Center in Cleveland and a member of SHM’s Practice Analysis Committee.

References

  1. Feng Z, Wright DB, Mor V. Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences. Health Aff (Millwood). 2012;31(6):1251-1259.
  2. Baugh CW, Venkatesh AK, Hilton JA, Samuel PA, Schuur JD, Bohan JS. Making greater use of dedicated hospital observation units for many short-stay patients could save $3.1 billion a year. Health Aff (Millwood). 2012;31(10):2314-2323.
  3. Gengler A. The painful new trend in Medicare. CNN Money website. Available at: http://money.cnn.com/2012/08/07/pf/medicare-rehab-costs.moneymag/index.htm. Accessed March 6, 2013.
  4. Jaffe S. Patients held for observation can face steep drug bills. USA Today website. Available at: http://usatoday30.usatoday.com/money/industries/health/drugs/story/2012-04-30/drugs-can-be-expensive-in-observation-care/54646378/1. Accessed March 6, 2013.
  5. Landro L. Filling a gap between ERs and inpatient rooms. The Wall Street Journal website. Available at: http://online.wsj.com/article/SB10001424052970204349404578101060863887052.html. Accessed March 6, 2013.
Issue
The Hospitalist - 2013(04)
Display Headline
SHM Tallies Ratio of Hospital Respondents' Observation Admissions to Inpatient Admission Encounters