Itching

My patients itch. Do yours?

This time of year, many of them say their backs itch, but the itch is not really their main concern. What worries them more is what the itch means. They know there are spots back there. They can feel them even if they can’t see them very well. Does the itch mean those spots are turning into something?

Sometimes those spots on their backs are moles. Sometimes they are seborrheic keratoses. But basically they’re all just innocent bystanders. Even if there does happen to be a superficial basal cell back there, any itch in the vicinity has nothing to do with any of the spots.

"Itching," I tell my patients, "is a sign that you are alive." After a short pause for mental processing, most of them smile. Being alive is good. Itch is your friend.

If they don’t smile and instead continue to look anguished, I sometimes freeze off some of their keratoses, just so they can feel reassured. You never know about those pesky growths. They’re benign today, but who knows about tomorrow? And they’re itchy, aren’t they? Doesn’t an itch mean something?

As far as I’m concerned, it doesn’t mean much, or at least not much about malignant transformation. Sometimes a cigar is just a cigar, and mostly an itch is just an itch. But to many of my patients, an itch is much more: Itch is change, itch is instability. Something is happening, something is changing, something is going on. Maybe one thing is turning into something else. Maybe it will.

Last week, I saw a thirtyish woman who wanted a skin check. One of her concerns was an itchy spot on her left shoulder. Lately, it had started to "move down" to her upper arm. As she admitted herself, there was absolutely nothing to be seen on the skin. She couldn’t possibly be worried about ...

Yes, she could. "This isn’t skin cancer, is it?" she asked. I assured her it was not. She seemed to believe me. I couldn’t remove anything anyway, because there was nothing to remove.

I don’t know where people get the idea that itch, especially when it applies to a mole or growth, means possible cancer. But wherever they get the idea, many of them certainly have it. They ask about it all the time. "I’m worried about that mole," they say.

"Do you think it’s changed, gotten larger or darker?"

"No, it looks the same. But now it itches."

People worry, not just about the itch, but about what happens when they scratch it. They’ve been warned since childhood not to scratch. Scratching can cause damage or infection. If what they’re scratching is a spot, then scratching can possibly turn the spot into ... don’t say it!

Of course, people complain about itching for a lot of reasons: They have eczema, or dry skin, or winter itch. Older folks have trouble sleeping because of itch. Office workers are embarrassed by itch – they have to leave meetings to keep their colleagues from twitching uncomfortably when they see them scratch. ("Like a monkey," is usually how they put it.) People who work in nursing homes or homeless shelters worry that they picked up a creepy-crawly from one of their clients. I once read that a king of England forbade commoners from scratching their itches, because scratching was so much fun that he wanted to reserve it for royalty. Couples married 7 years may get the itch. Treatises have been written about itching and scratching. I have not read them. Some things are better enjoyed than read about.

When the itch is accompanied by a visible rash – atopic eczema is the parade example – you treat the itch by treating the rash. When the patient has an itch but no rash other than scratch marks, it’s often best not just to treat the symptom, but to eliminate the worry that accompanies and exaggerates the symptom. No, the itch is not bugs. No, the itch is not liver disease. No, scratching will not cause damage, or you-know-what.

No, the itch is not cancer. There, I said it.

You itch. Itch is life. Celebrate!

Dr. Rockoff practices dermatology in Brookline, Mass. To respond to this column, e-mail him at our editorial offices at [email protected].

Implementing Peer Evaluation of Handoffs: Associations With Experience and Workload

The advent of restricted residency duty hours has thrust the safety risks of handoffs into the spotlight. More recently, the Accreditation Council for Graduate Medical Education (ACGME) has restricted hours even further, to a maximum of 16 hours for first‐year residents and up to 28 hours for residents beyond their first year.[1] Although the focus of these mandates has been scheduling and staffing in residency programs, another important area of attention is handoff education and evaluation. The ACGME Common Program Requirements state that all residency programs should ensure that residents are competent in handoff communications and that programs should monitor handoffs to ensure that they are safe.[2] Moreover, recent efforts have defined milestones for handoffs, specifically that by 12 months, residents should be able to effectively communicate with other caregivers to maintain continuity during transitions of care.[3] Although more detailed handoff‐specific milestones have yet to be fleshed out, the need for evaluation instruments to assess these milestones is critical. In addition, handoffs continue to represent a vulnerable time for patients in many specialties, such as surgery and pediatrics.[4, 5]

Evaluating handoffs poses specific challenges for internal medicine residency programs because handoffs are often conducted on the fly or wherever convenient, and not always at a dedicated time and place.[6] Even when evaluations could be conducted at a dedicated time and place, program faculty and leadership may not be comfortable evaluating handoffs in real time because of a lack of faculty development and recent experience with handoffs. Although supervising faculty may be in the ideal position, given their intimate knowledge of the patient and their ability to evaluate the clinical judgment of trainees, they may face additional pressures of supervision and direct patient care that prevent their attendance at the time of the handoff. For these reasons, the peers to whom residents frequently hand off may be well positioned to evaluate the quality of a resident's handoff. Because handoffs are also conceptualized as an interactive dialogue between sender and receiver, an ideal handoff performance evaluation would capture both of these roles.[7] Peer evaluation may therefore be a viable modality to assist programs in evaluating handoffs. Peer evaluation has been shown to be an effective method of rating the performance of medical students,[8] practicing physicians,[9] and residents.[10] Moreover, peer evaluation is now a required feature in assessing internal medicine resident performance.[11] Although enthusiasm for peer evaluation has grown in residency training, its use can still be limited by a variety of problems, such as reluctance to rate peers poorly, difficulty obtaining evaluations, and questions about the utility of such evaluations. For these reasons, it is important to understand whether peer evaluation of handoffs is feasible. Therefore, the aim of this study was to assess the feasibility of an online peer evaluation survey tool for handoffs in an internal medicine residency and to characterize performance over time as well as associations between workload and performance.

METHODS

From July 2009 to March 2010, all interns on the general medicine inpatient service at 2 hospitals were asked to complete an end‐of‐month anonymous peer evaluation that included 14 items addressing all core competencies. The evaluation tool was administered electronically using New Innovations (New Innovations, Inc., Uniontown, OH). Interns signed out to each other in a cross‐cover circuit that included 3 other interns on an every‐fourth‐night call cycle.[12] Call teams included 1 resident and 1 intern who worked from 7 am on the on‐call day to noon on the postcall day. Therefore, postcall interns were expected to hand off to the next on‐call intern before noon. Although attendings and senior residents were not required to formally supervise the handoff, supervising senior residents were often present during postcall intern sign‐out to facilitate departure of the team. When interns were not postcall, they were expected to sign out before they went to clinic in the afternoon or when their foreseeable work was complete. The interns were given a 45‐minute lecture on handoffs and introduced to the peer evaluation tool in July 2009 at intern orientation. They were also prompted to complete the tool to the best of their ability after their general medicine rotation. We chose the general medicine rotation because each intern completed approximately 2 months of general medicine in the first year. This provided ratings over time without overburdening interns with 3 additional evaluations after every inpatient rotation.

The peer evaluation was constructed to correspond to specific ACGME core competencies and was also linked to specific handoff behaviors known to be effective. The questions were adapted from items used in a validated direct‐observation tool previously developed by the authors (the Handoff Clinical Evaluation Exercise), which was based on literature review as well as expert opinion.[13, 14] For example, under the core competency of communication, interns were asked to rate each other on communication skills using the anchors of "No questions, no acknowledgement of to-do tasks, transfer of information face to face is not a priority" for low unsatisfactory (1) and "Appropriate use of questions, acknowledgement and read-back of to-do and priority tasks, face-to-face communication a priority" for high superior (9). Items referring to behaviors involved in both giving and receiving handoff were used to capture the interactive dialogue between senders and receivers that characterizes ideal handoffs. In addition, specific items referring to written sign‐out and verbal sign‐out were developed to capture their differences. For instance, for the patient care competency in written sign‐out, low unsatisfactory (1) was defined as "Incomplete written content; to-do's omitted or requested with no rationale or plan, or with inadequate preparation (ie, request to transfuse but consent not obtained)," and high superior (9) was defined as "Content is complete with to-do's accompanied by clear plan of action and rationale." Pilot testing was conducted with trainees, including residents not involved in the study and clinical students. The tool was also reviewed by the residency program leadership, and in an effort to standardize the reporting of the items with our other evaluation forms, each item was mapped to the core competency to which it was most related. After the tool had been used, a debriefing on the experience was performed with 3 residents who had an interest in medical education and handoff performance.
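To make the structure of the instrument concrete, the sketch below shows one way such an anchored rating item could be represented in code. It is purely illustrative: the class and field names are hypothetical, not the authors' form definition, and the anchor text is adapted from the example above.

```python
# Hypothetical representation of one anchored 9-point rating item from a
# handoff peer-evaluation form (illustrative only, not the study's instrument).
from dataclasses import dataclass

@dataclass
class HandoffItem:
    item_id: str       # e.g., "Q9"
    competency: str    # ACGME core competency the item maps to
    role: str          # "sender" or "receiver"
    anchor_low: str    # behavioral description for unsatisfactory (1)
    anchor_high: str   # behavioral description for superior (9)

communication_item = HandoffItem(
    item_id="Q9",
    competency="Interpersonal and communication skills",
    role="receiver",
    anchor_low="No questions, no acknowledgement of to-do tasks; "
               "face-to-face transfer of information is not a priority",
    anchor_high="Appropriate use of questions, acknowledgement and read-back of "
                "to-do and priority tasks; face-to-face communication a priority",
)
```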

The tool was deployed following a brief educational session for interns, in which the tool was previewed and reviewed. Interns were counseled to use the form as a global performance assessment over the course of the month rather than as an episodic evaluation. This approach was also intended to avoid negative event bias, in which a rater allows a single negative event to influence the perception of a person's performance long after the event has passed.

To analyze the data, descriptive statistics were used to summarize mean performance across domains. To assess whether intern performance improved over time, we split the academic year into 3 time periods of 3 months each, which we have used in earlier studies assessing intern experience.[15] Prior to analysis, postcall interns were identified by using the intern monthly call schedule located in the AMiON software program (Norwich, VT) to label the evaluation of the postcall intern. Then, all names were removed and replaced with a unique identifier for the evaluator and the evaluatee. In addition, each evaluation was also categorized as either having come from the main teaching hospital or the community hospital affiliate.
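As an illustration of this preparation step, the sketch below flags postcall evaluations using a call-schedule export and replaces intern names with anonymous identifiers. The file names and column names are hypothetical; this is not the study's actual data pipeline.

```python
# Hypothetical sketch of the postcall-labeling and deidentification step.
import pandas as pd

evals = pd.read_csv("raw_evaluations.csv")    # evaluator_name, evaluatee_name, date, item scores
schedule = pd.read_csv("call_schedule.csv")   # intern_name, date, postcall (0/1), exported from the scheduler

# Flag evaluations of interns who were postcall on the evaluation date.
evals = evals.merge(
    schedule.rename(columns={"intern_name": "evaluatee_name"}),
    on=["evaluatee_name", "date"],
    how="left",
)
evals["postcall"] = evals["postcall"].fillna(0).astype(int)

# One anonymous ID per intern (interns appear as both evaluators and evaluatees).
names = sorted(set(evals["evaluator_name"]) | set(evals["evaluatee_name"]))
ids = {name: f"ID{i:03d}" for i, name in enumerate(names)}
evals["evaluator_id"] = evals["evaluator_name"].map(ids)
evals["evaluatee_id"] = evals["evaluatee_name"].map(ids)
evals = evals.drop(columns=["evaluator_name", "evaluatee_name"])
evals.to_csv("peer_evaluations_deidentified.csv", index=False)
```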

Multivariate random effects linear regression models, controlling for evaluator, evaluatee, and hospital, were used to assess the associations of time (using indicator variables for season) and postcall status with intern performance. In addition, because of the skewness in the ratings, we undertook additional analysis by transforming our data into dichotomous variables reflecting superior performance; the main findings did not change when we conducted conditional ordinal logistic regression. We also investigated within‐subject and between‐subject variation using intraclass correlation coefficients. Within‐subject intraclass correlation enabled assessment of inter‐rater reliability. Between‐subject intraclass correlation enabled assessment of evaluator effects. Evaluator effects can encompass a variety of forms of rater bias, such as leniency (the evaluator rates individuals uniformly positively), severity (the evaluator avoids using positive ratings), or the halo effect (a single strongly positive attribute of the individual being evaluated overrides the behavior being evaluated). All analyses were completed using STATA 10.0 (StataCorp, College Station, TX), with statistical significance defined as P < 0.05. This study was deemed exempt from institutional review board review after all data were deidentified prior to analysis.
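The authors ran these models in Stata; the sketch below is a rough Python analogue of the analysis described above, not the original code. The file name and column names (score, season, postcall, community, evaluator, evaluatee) are assumptions for illustration.

```python
# Minimal Python sketch (statsmodels) of the mixed-effects model and the
# ICC-as-variance-partition calculation described above. Illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("peer_evaluations_deidentified.csv")  # hypothetical dataset

# Random intercept for the rated intern (evaluatee); fixed effects for season,
# postcall status, site, and evaluator, mirroring the model in the Table 2 note.
model = smf.mixedlm(
    "score ~ C(season) + postcall + community + C(evaluator)",
    data=df,
    groups=df["evaluatee"],
)
fit = model.fit()
print(fit.summary())

def icc(data: pd.DataFrame, group_col: str) -> float:
    """ICC as the share of total variance attributable to the grouping factor,
    estimated from an intercept-only mixed model."""
    m = smf.mixedlm("score ~ 1", data=data, groups=data[group_col]).fit()
    between = float(m.cov_re.iloc[0, 0])  # variance of the random intercept
    within = float(m.scale)               # residual variance
    return between / (between + within)

print("ICC by evaluatee:", icc(df, "evaluatee"))  # agreement on the same intern
print("ICC by evaluator:", icc(df, "evaluator"))  # clustering of scores within a rater
```

The icc helper treats the coefficient as a simple variance partition, which is one common way to estimate it; the study's exact estimation approach may differ.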

RESULTS

From July 2009 to March 2010, 31 interns (78%) returned 60% (172/288) of the peer evaluations they received. Almost all (39/40, 98%) interns were evaluated at least once, with a median of 4 ratings per intern (range, 1–9). Thirty‐five percent of ratings occurred when an intern was rotating at the community hospital. Ratings were very high on all domains (mean, 8.3–8.6). Overall sign‐out performance was rated as 8.4 (95% confidence interval [CI], 8.3‐8.5), with over 55% rating peers as 9 (maximal score). The lowest score given was 5. Individual items ranged from a low of 8.34 (95% CI, 8.21‐8.47) for updating written sign‐outs to a high of 8.60 (95% CI, 8.50‐8.69) for collegiality (Table 1). The internal consistency of the instrument, calculated using all items, was very high, with a Cronbach's α = 0.98.
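For readers unfamiliar with the reliability statistic reported here, the sketch below computes Cronbach's alpha from an evaluations-by-items matrix. The data are simulated purely for illustration; they are not the study data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of item totals).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one row per completed evaluation, one column per rating item (1-9)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated, highly correlated ratings (illustrative only).
rng = np.random.default_rng(0)
base = rng.normal(8.4, 0.5, size=200)
sim = pd.DataFrame({f"Q{i}": np.clip(base + rng.normal(0, 0.2, 200), 1, 9) for i in range(1, 15)})
print(round(cronbach_alpha(sim), 2))  # near 1 when items move together
```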

Table 1. Mean Intern Ratings on Sign‐out Peer Evaluation by Item and Competency

| ACGME Core Competency | Role | Item | No. | Mean | 95% CI | Range | % Receiving 9 as Rating |
| Patient care | Sender | Written sign-out | Q1 | 8.34 | 8.25 to 8.48 | 6–9 | 53.2 |
| Patient care | Sender | Updated content | Q2 | 8.35 | 8.22 to 8.47 | 5–9 | 54.4 |
| Patient care | Receiver | Documentation of overnight events | Q6 | 8.41 | 8.30 to 8.52 | 6–9 | 56.3 |
| Medical knowledge | Sender | Anticipatory guidance | Q3 | 8.40 | 8.28 to 8.51 | 6–9 | 56.3 |
| Medical knowledge | Receiver | Clinical decision making during cross-cover | Q7 | 8.45 | 8.35 to 8.55 | 6–9 | 56.0 |
| Professionalism | Sender | Collegiality | Q4 | 8.60 | 8.51 to 8.68 | 6–9 | 65.7 |
| Professionalism | Receiver | Acknowledgement of professional responsibility | Q10 | 8.53 | 8.43 to 8.62 | 6–9 | 62.4 |
| Professionalism | Receiver | Timeliness/responsiveness | Q11 | 8.50 | 8.39 to 8.60 | 6–9 | 61.9 |
| Interpersonal and communication skills | Receiver | Listening behavior when receiving sign-outs | Q8 | 8.52 | 8.42 to 8.62 | 6–9 | 63.6 |
| Interpersonal and communication skills | Receiver | Communication when receiving sign-out | Q9 | 8.52 | 8.43 to 8.62 | 6–9 | 63.0 |
| Systems-based practice | Receiver | Resource use | Q12 | 8.45 | 8.35 to 8.55 | 6–9 | 55.6 |
| Practice-based learning and improvement | Sender | Accepting of feedback | Q5 | 8.45 | 8.34 to 8.55 | 6–9 | 58.7 |
| Overall | Both | Overall sign-out quality | Q13 | 8.44 | 8.34 to 8.54 | 6–9 | 55.3 |

NOTE: Abbreviations: ACGME, Accreditation Council for Graduate Medical Education; CI, confidence interval.

Mean ratings for each item increased in seasons 2 and 3, and the increases were statistically significant using a test for trend across ordered groups. However, in multivariate regression models, improvements remained statistically significant for only 4 items (Figure 1): 1) communication skills, 2) listening behavior, 3) accepting professional responsibility, and 4) accessing the system (Table 2). Specifically, compared to season 1, improvements in communication skill were seen in season 2 (+0.34 [95% CI, 0.08‐0.60], P = 0.009) and were sustained in season 3 (+0.34 [95% CI, 0.06‐0.61], P = 0.018). A similar pattern was observed for listening behavior, with improvements of similar magnitude with increasing intern experience (season 2, +0.29 [95% CI, 0.04‐0.55], P = 0.025 compared to season 1). Although accessing the system scores showed a similar pattern of improvement, with an increase in season 2 compared to season 1, the magnitude of this change was smaller (season 2, +0.21 [95% CI, 0.03‐0.39], P = 0.023). Interestingly, ratings of accepting professional responsibility rose during season 2, but the difference did not reach statistical significance until season 3 (+0.37 [95% CI, 0.08‐0.65], P = 0.012 compared to season 1).

Figure 1. Improvements over time in domains of sign-out performance by season, where season 1 is July to September, season 2 is October to December, and season 3 is January to March. Results are from random effects linear regression models controlling for evaluator, evaluatee, postcall status, and site (community vs tertiary).
Table 2. Increasing Scores on Peer Handoff Evaluation by Season

| Predictor | Communication Skills | Listening Behavior | Professional Responsibility | Accessing the System | Written Sign-out Quality |
| Season 1 | Ref | Ref | Ref | Ref | Ref |
| Season 2 | 0.29 (0.04 to 0.55)* | 0.34 (0.08 to 0.60)* | 0.24 (−0.03 to 0.51) | 0.21 (0.03 to 0.39)* | −0.05 (−0.25 to 0.15) |
| Season 3 | 0.29 (0.02 to 0.56)* | 0.34 (0.06 to 0.61)* | 0.37 (0.08 to 0.65)* | 0.18 (0.01 to 0.36)* | 0.08 (−0.13 to 0.30) |
| Community hospital | 0.18 (0.00 to 0.37) | 0.23 (0.04 to 0.43)* | 0.06 (−0.13 to 0.26) | 0.13 (0.00 to 0.25) | 0.24 (0.08 to 0.39)* |
| Postcall | −0.10 (−0.25 to 0.05) | −0.04 (−0.21 to 0.13) | −0.02 (−0.18 to 0.13) | −0.05 (−0.16 to 0.05) | −0.18 (−0.31 to −0.05)* |
| Constant | 7.04 (6.51 to 7.58) | 6.81 (6.23 to 7.38) | 7.04 (6.50 to 7.60) | 7.02 (6.59 to 7.45) | 6.49 (6.04 to 6.94) |

NOTE: Cells show coefficients (95% CI) for each outcome, from multivariable linear regression models examining the association of season, community hospital, and postcall status with performance, controlling for subject (evaluatee) random effects and evaluator fixed effects (evaluator and evaluatee effects not shown). Abbreviations: CI, confidence interval. *P < 0.05.

In addition to the effects of increasing experience, postcall interns were rated significantly lower than nonpostcall interns on 2 items: 1) written sign‐out quality (8.21 vs 8.39, P = 0.008) and 2) accepting feedback (practice‐based learning and improvement) (8.25 vs 8.42, P = 0.006). Interestingly, when interns were on the community hospital general medicine rotation, where overall census was much lower than at the teaching hospital, peer ratings were significantly higher for overall handoff performance and for 7 (written sign‐out, updated content, collegiality, accepting feedback, documentation of overnight events, clinical decision making during cross‐cover, and listening behavior) of the remaining 12 specific handoff domains (P < 0.05 for all, data not shown).

Last, significant evaluator effects were observed, which contributed to the variance in ratings given. Using intraclass correlation coefficients (ICC), we found greater within‐intern variation than between‐intern variation: scores given by the same evaluator tended to be strongly correlated with one another (eg, ICC for overall performance = 0.64), more so than multiple evaluations of the same intern (eg, ICC for overall performance = 0.18).

Because ratings of handoff performance were skewed, we also conducted a sensitivity analysis using ordinal logistic regression to ascertain whether our findings remained significant. In these models, significant improvements were seen in season 3 for 3 of the behaviors listed above: listening behavior, professional responsibility, and accessing the system. Although there was no improvement in communication, an improvement in collegiality scores reached significance in season 3.

DISCUSSION

This study shows that it is feasible to obtain ratings of intern handoff performance from peers using an end‐of‐rotation online assessment of handoff skills. Although there was evidence of rater bias toward leniency and low inter‐rater reliability, peer ratings of intern performance did increase over time. In addition, peer ratings were lower for interns who were handing off their postcall service. Working on a rotation at a community affiliate with a lower census was associated with higher peer ratings of handoffs.

It is worth considering the mechanisms behind these findings. First, the leniency observed in peer ratings likely reflects peers' unwillingness to critique each other, driven by a desire for esprit de corps among classmates. The low intraclass correlation coefficient for ratings of the same intern highlights that peers do not easily converge in their ratings of the same intern. Nevertheless, the ratings on the peer evaluation did demonstrate improvements over time. This improvement could easily reflect on‐the‐job learning, as interns become more acquainted with their roles and more efficient and competent in their tasks. Together, these data provide a foundation for developing handoff milestones that reflect the natural progression of intern competence in handoffs. For example, communication appeared to improve at 3 months, whereas transfer of professional responsibility improved at 6 months after beginning internship. However, alternative explanations are also important to consider. Although it is easy and somewhat reassuring to assume that increases over time reflect a learning effect, it is also possible that interns become less willing to critique their peers as familiarity with them increases.

There are several reasons why postcall interns could have been rated lower than nonpostcall interns. First, postcall interns likely had the sickest patients, with the most to-do tasks and work associated with their sign‐out, because they were handing off newly admitted patients. Because the postcall sign‐out is associated with the highest workload, it may be that interns perceive a good handoff as one with "nothing to do," and handoffs associated with more work are not highly rated. It is also important to note that postcall interns, who in this study were at the end of a 30‐hour duty shift, were also the most fatigued and overworked, which may have affected the handoff, especially in the 2 domains of interest. Because of the time pressure to leave, coupled with fatigue, they may have had less time to invest in written sign‐out quality and may not have been receptive to feedback on their performance. Likewise, performance on handoffs was rated higher at the community hospital, which could be due to several reasons. The most plausible explanation is that the workload associated with that sign‐out is lower because of lower patient census and lower patient acuity. At the community hospital, fewer residents were also geographically co‐located in a quieter ward and work room area, which may contribute to higher ratings across domains.

This study also has implications for future efforts to improve and evaluate handoff performance in residency trainees. For example, our findings suggest the importance of enhancing supervision and training for handoffs during high‐workload rotations or at certain times of the year. In addition, evaluation systems that rely solely on peer evaluation will not likely yield an accurate picture of handoff performance, given the difficulty of obtaining peer evaluations, the halo effect, and other forms of evaluator bias in ratings. Accurate handoff evaluation may require direct observation of verbal communication and faculty audit of written sign‐outs.[16, 17] Moreover, methods such as appreciative inquiry can help identify the peers with the best practices to emulate.[18] Future efforts are needed to validate peer assessment of handoffs against these other assessment methods, such as direct observation by service attendings.

There are limitations to this study. First, although our findings are limited to 1 residency program and 1 type of rotation, we have already expanded to a community residency program that uses a float system and have disseminated our tool to several other institutions. In addition, we had a small number of participants, and our 60% return rate on monthly peer evaluations raises concerns about nonresponse bias. For example, a peer who perceived the handoff performance of an intern to be poor may be less likely to return the evaluation. Because our dataset was deidentified per institutional review board request, we do not have any information to differentiate systematic reasons for not responding to the evaluation. Anecdotally, a critique of the tool is that it is lengthy, especially in light of the fact that 1 intern completes 3 additional handoff evaluations. It is worth considering why the instrument had such high internal consistency. Although the items were designed to address different competencies, peers may make a global assessment of someone's ability to perform a handoff and then fill out the evaluation accordingly. This speaks to the difficulty of evaluating the subcomponents of various actions related to the handoff. Because of the high internal consistency, we were able to shorten the survey to a 5‐item instrument with a Cronbach's α of 0.93, which we are currently using in our program and have disseminated to other programs. Although it is currently unclear whether ratings of performance on the longer peer evaluation are valid, we are investigating the concurrent validity of the shorter tool by comparing peer evaluations to other measures of handoff quality as part of our current work. Last, we are only able to test associations and not make causal inferences.

CONCLUSION

Peer assessment of handoff skills is feasible via an electronic competency‐based tool. Although there is evidence of score inflation, intern performance does increase over time and is associated with various aspects of workload, such as postcall status or working on a rotation at a community affiliate with a lower census. Together, these data can provide a foundation for developing handoff milestones that reflect the natural progression of intern competence in handoffs.

Acknowledgments

The authors thank the University of Chicago Medicine residents and chief residents, the members of the Curriculum and Housestaff Evaluation Committee, Tyrece Hunter and Amy Ice‐Gibson, and Meryl Prochaska and Laura Ruth Venable for assistance with manuscript preparation.

Disclosures

This study was funded by the University of Chicago Department of Medicine Clinical Excellence and Medical Education Award and AHRQ R03 5R03HS018278‐02 Development of and Validation of a Tool to Evaluate Hand‐off Quality.

References
  1. Nasca TJ, Day SH, Amis ES; the ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363.
  2. Common program requirements. Available at: http://acgme-2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed December 10, 2012.
  3. Green ML, Aagaard EM, Caverzagie KJ, et al. Charting the road to competence: developmental milestones for internal medicine residency training. J Grad Med Educ. 2009;1(1):5–20.
  4. Greenberg CC, Regenbogen SE, Studdert DM, et al. Patterns of communication breakdowns resulting in injury to surgical patients. J Am Coll Surg. 2007;204(4):533–540.
  5. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50(1):57–63.
  6. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1(4):257–266.
  7. Gibson SC, Ham JJ, Apker J, Mallak LA, Johnson NA. Communication, communication, communication: the art of the handoff. Ann Emerg Med. 2010;55(2):181–183.
  8. Arnold L, Willouby L, Calkins V, Gammon L, Eberhardt G. Use of peer evaluation in the assessment of medical students. J Med Educ. 1981;56:35–42.
  9. Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA. 1993;269:1655–1660.
  10. Thomas PA, Gebo KA, Hellmann DB. A pilot study of peer review in residency training. J Gen Intern Med. 1999;14(9):551–554.
  11. ACGME Program Requirements for Graduate Medical Education in Internal Medicine Effective July 1, 2009. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramRequirements/140_internal_medicine_07012009.pdf. Accessed December 10, 2012.
  12. Arora V, Dunphy C, Chang VY, Ahmad F, Humphrey HJ, Meltzer D. The effects of on-duty napping on intern sleep time and fatigue. Ann Intern Med. 2006;144(11):792–798.
  13. Farnan JM, Paro JA, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2010;25(2):129–134.
  14. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365-2702.2012.04131.x.
  15. Arora VM, Georgitis E, Siddique J, et al. Association of workload of on-call medical interns with on-call sleep duration, shift duration, and participation in educational activities. JAMA. 2008;300(10):1146–1153.
  16. Gakhar B, Spencer AL. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign-out practices of internal medicine interns. Acad Med. 2010;85(7):1182–1188.
  17. Bump GM, Bost JE, Buranosky R, Elnicki M. Faculty member review and feedback using a sign-out checklist: improving intern written sign-out. Acad Med. 2012;87(8):1125–1131.
  18. Helms AS, Perez TE, Baltz J, et al. Use of an appreciative inquiry approach to improve resident sign-out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287–291.
Article PDF
Issue
Journal of Hospital Medicine - 8(3)
Page Number
132-136
Sections
Files
Files
Article PDF
Article PDF

The advent of restricted residency duty hours has thrust the safety risks of handoffs into the spotlight. More recently, the Accreditation Council of Graduate Medical Education (ACGME) has restricted hours even further to a maximum of 16 hours for first‐year residents and up to 28 hours for residents beyond their first year.[1] Although the focus on these mandates has been scheduling and staffing in residency programs, another important area of attention is for handoff education and evaluation. The Common Program Requirements for the ACGME state that all residency programs should ensure that residents are competent in handoff communications and that programs should monitor handoffs to ensure that they are safe.[2] Moreover, recent efforts have defined milestones for handoffs, specifically that by 12 months, residents should be able to effectively communicate with other caregivers to maintain continuity during transitions of care.[3] Although more detailed handoff‐specific milestones have to be flushed out, a need for evaluation instruments to assess milestones is critical. In addition, handoffs continue to represent a vulnerable time for patients in many specialties, such as surgery and pediatrics.[4, 5]

Evaluating handoffs poses specific challenges for internal medicine residency programs because handoffs are often conducted on the fly or wherever convenient, and not always at a dedicated time and place.[6] Even when evaluations could be conducted at a dedicated time and place, program faculty and leadership may not be comfortable evaluating handoffs in real time due to lack of faculty development and recent experience with handoffs. Although supervising faculty may be in the most ideal position due to their intimate knowledge of the patient and their ability to evaluate the clinical judgment of trainees, they may face additional pressures of supervision and direct patient care that prevent their attendance at the time of the handoff. For these reasons, potential people to evaluate the quality of a resident handoff may be the peers to whom they frequently handoff. Because handoffs are also conceptualized as an interactive dialogue between sender and receiver, an ideal handoff performance evaluation would capture both of these roles.[7] For these reasons, peer evaluation may be a viable modality to assist programs in evaluating handoffs. Peer evaluation has been shown to be an effective method of rating performance of medical students,[8] practicing physicians,[9] and residents.[10] Moreover, peer evaluation is now a required feature in assessing internal medicine resident performance.[11] Although enthusiasm for peer evaluation has grown in residency training, the use of it can still be limited by a variety of problems, such as reluctance to rate peers poorly, difficulty obtaining evaluations, and the utility of such evaluations. For these reasons, it is important to understand whether peer evaluation of handoffs is feasible. Therefore, the aim of this study was to assess feasibility of an online peer evaluation survey tool of handoffs in an internal medicine residency and to characterize performance over time as well and associations between workload and performance.

METHODS

From July 2009 to March 2010, all interns on the general medicine inpatient service at 2 hospitals were asked to complete an end‐of‐month anonymous peer evaluation that included 14‐items addressing all core competencies. The evaluation tool was administered electronically using New Innovations (New Innovations, Inc., Uniontown, OH). Interns signed out to each other in a cross‐cover circuit that included 3 other interns on an every fourth night call cycle.[12] Call teams included 1 resident and 1 intern who worked from 7 am on the on‐call day to noon on the postcall day. Therefore, postcall interns were expected to hand off to the next on‐call intern before noon. Although attendings and senior residents were not required to formally supervise the handoff, supervising senior residents were often present during postcall intern sign‐out to facilitate departure of the team. When interns were not postcall, they were expected to sign out before they went to the clinic in the afternoon or when their foreseeable work was complete. The interns were provided with a 45‐minute lecture on handoffs and introduced to the peer evaluation tool in July 2009 at an intern orientation. They were also prompted to complete the tool to the best of their ability after their general medicine rotation. We chose the general medicine rotation because each intern completed approximately 2 months of general medicine in their first year. This would provide ratings over time without overburdening interns to complete 3 additional evaluations after every inpatient rotation.

The peer evaluation was constructed to correspond to specific ACGME core competencies and was also linked to specific handoff behaviors that were known to be effective. The questions were adapted from prior items used in a validated direct‐observation tool previously developed by the authors (the Handoff Clinical Evaluation Exercise), which was based on literature review as well as expert opinion.[13, 14] For example, under the core competency of communication, interns were asked to rate each other on communication skills using the anchors of No questions, no acknowledgement of to do tasks, transfer of information face to face is not a priority for low unsatisfactory (1) and Appropriate use of questions, acknowledgement and read‐back of to‐do and priority tasks, face to face communication a priority for high superior (9). Items that referred to behaviors related to both giving handoff and receiving handoff were used to capture the interactive dialogue between senders and receivers that characterize ideal handoffs. In addition, specific items referring to written sign‐out and verbal sign‐out were developed to capture the specific differences. For instance, for the patient care competency in written sign‐out, low unsatisfactory (1) was defined as Incomplete written content; to do's omitted or requested with no rationale or plan, or with inadequate preparation (ie, request to transfuse but consent not obtained), and high superior (9) was defined as Content is complete with to do's accompanied by clear plan of action and rationale. Pilot testing with trainees was conducted, including residents not involved in the study and clinical students. The tool was also reviewed by the residency program leadership, and in an effort to standardize the reporting of the items with our other evaluation forms, each item was mapped to a core competency that it was most related to. Debriefing of the instrument experience following usage was performed with 3 residents who had an interest in medical education and handoff performance.

The tool was deployed to interns following a brief educational session for interns, in which the tool was previewed and reviewed. Interns were counseled to use the form as a global performance assessment over the course of the month, in contrast to an episodic evaluation. This would also avoid the use of negative event bias by raters, in which the rater allows a single negative event to influence the perception of the person's performance, even long after the event has passed into history.

To analyze the data, descriptive statistics were used to summarize mean performance across domains. To assess whether intern performance improved over time, we split the academic year into 3 time periods of 3 months each, which we have used in earlier studies assessing intern experience.[15] Prior to analysis, postcall interns were identified by using the intern monthly call schedule located in the AMiON software program (Norwich, VT) to label the evaluation of the postcall intern. Then, all names were removed and replaced with a unique identifier for the evaluator and the evaluatee. In addition, each evaluation was also categorized as either having come from the main teaching hospital or the community hospital affiliate.

Multivariate random effects linear regression models, controlling for evaluator, evaluatee, and hospital, were used to assess the association between time (using indicator variables for season) and postcall status on intern performance. In addition, because of the skewness in the ratings, we also undertook additional analysis by transforming our data into dichotomous variables reflecting superior performance. After conducting conditional ordinal logistic regression, the main findings did not change. We also investigated within‐subject and between‐subject variation using intraclass correlation coefficients. Within‐subject intraclass correlation enabled assessment of inter‐rater reliability. Between‐subject intraclass correlation enabled the assessment of evaluator effects. Evaluator effects can encompass a variety of forms of rater bias such as leniency (in which evaluators tended to rate individuals uniformly positively), severity (rater tends to significantly avoid using positive ratings), or the halo effect (the individual being evaluated has 1 significantly positive attribute that overrides that which is being evaluated). All analyses were completed using STATA 10.0 (StataCorp, College Station, TX) with statistical significance defined as P < 0.05. This study was deemed to be exempt from institutional review board review after all data were deidentified prior to analysis.

RESULTS

From July 2009 to March 2010, 31 interns (78%) returned 60% (172/288) of the peer evaluations they received. Almost all (39/40, 98%) interns were evaluated at least once with a median of 4 ratings per intern (range, 19). Thirty‐five percent of ratings occurred when an intern was rotating at the community hospital. Ratings were very high on all domains (mean, 8.38.6). Overall sign‐out performance was rated as 8.4 (95% confidence interval [CI], 8.3‐8.5), with over 55% rating peers as 9 (maximal score). The lowest score given was 5. Individual items ranged from a low of 8.34 (95% CI, 8.21‐8.47) for updating written sign‐outs, to a high of 8.60 (95% CI, 8.50‐8.69) for collegiality (Table 1) The internal consistency of the instrument was calculated using all items and was very high, with a Cronbach = 0.98.

Mean Intern Ratings on Sign‐out Peer Evaluation by Item and Competency
ACGME Core CompetencyRoleItemsItemMean95% CIRange% Receiving 9 as Rating
  • NOTE: Abbreviations: ACGME, Accreditation Council of Graduate Medical Education; CI, confidence interval.

Patient careSenderWritten sign‐outQ18.348.25 to 8.486953.2
SenderUpdated contentQ28.358.22 to 8.475954.4
ReceiverDocumentation of overnight eventsQ68.418.30 to 8.526956.3
Medical knowledgeSenderAnticipatory guidanceQ38.408.28 to 8.516956.3
ReceiverClinical decision making during cross‐coverQ78.458.35 to 8.556956.0
ProfessionalismSenderCollegialityQ48.608.51 to 8.686965.7
ReceiverAcknowledgement of professional responsibilityQ108.538.43 to 8.626962.4
ReceiverTimeliness/responsivenessQ118.508.39 to 8.606961.9
Interpersonal and communication skillsReceiverListening behavior when receiving sign‐outsQ88.528.42 to 8.626963.6
ReceiverCommunication when receiving sign‐outQ98.528.43 to 8.626963.0
Systems‐based practiceReceiverResource useQ128.458.35 to 8.556955.6
Practice‐based learning and improvementSenderAccepting of feedbackQ58.458.34 to 8.556958.7
OverallBothOverall sign‐out qualityQ138.448.34 to 8.546955.3

Mean ratings for each item increased in season 2 and 3 and were statistically significant using a test for trend across ordered groups. However, in multivariate regression models, improvements remained statistically significant for only 4 items (Figure 1): 1) communication skills, 2) listening behavior, 3) accepting professional responsibility, and 4) accessing the system (Table 2). Specifically, when compared to season 1, improvements in communication skill were seen in season 2 (+0.34 [95% CI, 0.08‐0.60], P = 0.009) and were sustained in season 3 (+0.34 [95% CI, 0.06‐0.61], P = 0.018). A similar pattern was observed for listening behavior, with improvement in ratings that were similar in magnitude with increasing intern experience (season 2, +0.29 [95% CI, 0.04‐0.55], P = 0.025 compared to season 1). Although accessing the system scores showed a similar pattern of improvement with an increase in season 2 compared to season 1, the magnitude of this change was smaller (season 2, +0.21 [95% CI, 0.03‐0.39], P = 0.023). Interestingly, improvements in accepting professional responsibility rose during season 2, but the difference did not reach statistical significance until season 3 (+0.37 [95% CI, 0.08‐0.65], P = 0.012 compared to season 1).

Figure 1
Graph showing improvements over time in performance in domains of sign‐out performance by season, where season 1 is July to September, season 2 is October to December, and season 3 is January to March. Results are obtained from random effects linear regression models controlling for evaluator, evaluate, postcall status, and site (community vs tertiary).
Increasing Scores on Peer Handoff Evaluation by Season
 Outcome
 Coefficient (95% CI)
PredictorCommunication SkillsListening BehaviorProfessional ResponsibilityAccessing the SystemWritten Sign‐out Quality
  • NOTE: Results are from multivariable linear regression models examining the association between season, community hospital, postcall status controlling for subject (evaluatee) random effects, and evaluator fixed effects (evaluator and evaluate effects not shown). Abbreviations: CI, confidence interval. *P < 0.05.

Season 1RefRefRefRefRef
Season 20.29 (0.04 to 0.55)a0.34 (0.08 to 0.60)a0.24 (0.03 to 0.51)0.21 (0.03 to 0.39)a0.05 (0.25 to 0.15)
Season 30.29 (0.02 to 0.56)a0.34 (0.06 to 0.61)a0.37 (0.08 to 0.65)a0.18 (0.01 to 0.36)a0.08 (0.13 to 0.30)
Community hospital0.18 (0.00 to 0.37)0.23 (0.04 to 0.43)a0.06 (0.13 to 0.26)0.13 (0.00 to 0.25)0.24 (0.08 to 0.39)a
Postcall0.10 (0.25 to 0.05)0.04 (0.21 to 0.13)0.02 (0.18 to 0.13)0.05 (0.16 to 0.05)0.18 (0.31,0.05)a
Constant7.04 (6.51 to 7.58)6.81 (6.23 to 7.38)7.04 (6.50 to 7.60)7.02 (6.59 to 7.45)6.49 (6.04 to 6.94)

In addition to increasing experience, postcall interns were rated significantly lower than nonpostcall interns in 2 items: 1) written sign‐out quality (8.21 vs 8.39, P = 0.008) and 2) accepting feedback (practice‐based learning and improvement) (8.25 vs 8.42, P = 0.006). Interestingly, when interns were at the community hospital general medicine rotation, where overall census was much lower than at the teaching hospital, peer ratings were significantly higher for overall handoff performance and 7 (written sign‐out, update content, collegiality, accepting feedback, documentation of overnight events, clinical decision making during cross‐cover, and listening behavior) of the remaining 12 specific handoff domains (P < 0.05 for all, data not shown).

Last, significant evaluator effects were observed, which contributed to the variance in ratings given. For example, using intraclass correlation coefficients (ICC), we found that there was greater within‐intern variation than between‐intern variation, highlighting that evaluator scores tended to be strongly correlated with each other (eg, ICC overall performance = 0.64) and more so than scores of multiple evaluations of the same intern (eg, ICC overall performance = 0.18).

Because ratings of handoff performance were skewed, we also conducted a sensitivity analysis using ordinal logistic regression to ascertain if our findings remained significant. Using ordinal logistic regression models, significant improvements were seen in season 3 for 3 of the above‐listed behaviors, specifically listening behavior, professional responsibility, and accessing the system. Although there was no improvement in communication, there was an improvement observed in collegiality scores that were significant in season 3.

DISCUSSION

Using an end‐of‐rotation online peer assessment of handoff skills, it is feasible to obtain ratings of intern handoff performance from peers. Although there is evidence of rater bias toward leniency and low inter‐rater reliability, peer ratings of intern performance did increase over time. In addition, peer ratings were lower for interns who were handing off their postcall service. Working on a rotation at a community affiliate with a lower census was associated with higher peer ratings of handoffs.

It is worth considering the mechanism of these findings. First, the leniency observed in peer ratings likely reflects peers unwilling to critique each other due to a desire for an esprit de corps among their classmates. The low intraclass correlation coefficient for ratings of the same intern highlight that peers do not easily converge on their ratings of the same intern. Nevertheless, the ratings on the peer evaluation did demonstrate improvements over time. This improvement could easily reflect on‐the‐job learning, as interns become more acquainted with their roles and efficient and competent in their tasks. Together, these data provide a foundation for developing milestone handoffs that reflect the natural progression of intern competence in handoffs. For example, communication appeared to improve at 3 months, whereas transfer of professional responsibility improved at 6 months after beginning internship. However, alternative explanations are also important to consider. Although it is easy and somewhat reassuring to assume that increases over time reflect a learning effect, it is also possible that interns are unwilling to critique their peers as familiarity with them increases.

There are several reasons why postcall interns could have been universally rated lower than nonpostcall interns. First, postcall interns likely had the sickest patients with the most to‐do tasks or work associated with their sign‐out because they were handing off newly admitted patients. Because the postcall sign‐out is associated with the highest workload, it may be that interns perceive that a good handoff is nothing to do, and handoffs associated with more work are not highly rated. It is also important to note that postcall interns, who in this study were at the end of a 30‐hour duty shift, were also most fatigued and overworked, which may have also affected the handoff, especially in the 2 domains of interest. Due to the time pressure to leave coupled with fatigue, they may have had less time to invest in written sign‐out quality and may not have been receptive to feedback on their performance. Likewise, performance on handoffs was rated higher when at the community hospital, which could be due to several reasons. The most plausible explanation is that the workload associated with that sign‐out is less due to lower patient census and lower patient acuity. In the community hospital, fewer residents were also geographically co‐located on a quieter ward and work room area, which may contribute to higher ratings across domains.

This study also has implications for future efforts to improve and evaluate handoff performance among residency trainees. For example, our findings suggest the importance of enhancing supervision and training for handoffs during high-workload rotations or at certain times of the year. In addition, evaluation systems that rely solely on peer evaluation are unlikely to yield an accurate picture of handoff performance, given rater leniency, difficulty obtaining peer evaluations, the halo effect, and other forms of evaluator bias. Accurate handoff evaluation may require direct observation of verbal communication and faculty audit of written sign-outs.[16, 17] Moreover, methods such as appreciative inquiry can help identify the peers with the best practices to emulate.[18] Future efforts are needed to validate peer assessment of handoffs against these other assessment methods, such as direct observation by service attendings.

There are limitations to this study. First, our findings are limited to 1 residency program and 1 type of rotation, although we have since expanded to a community residency program that uses a float system and have disseminated our tool to several other institutions. In addition, we had a small number of participants, and our 60% return rate on monthly peer evaluations raises concern about nonresponse bias. For example, a peer who perceived the handoff performance of an intern to be poor may have been less likely to return the evaluation. Because our dataset was deidentified per institutional review board request, we do not have any information with which to identify systematic reasons for not responding to the evaluation. Anecdotally, one critique of the tool is that it is lengthy, especially because each intern completes 3 additional handoff evaluations. It is also worth considering why the instrument had such high internal consistency. Although the items were initially designed to address different competencies, peers may make a global assessment of someone's ability to perform a handoff and then fill out the evaluation accordingly. This speaks to the difficulty of evaluating the subcomponents of the various actions related to the handoff. Because of the high internal consistency, we were able to shorten the survey to a 5-item instrument with a Cronbach α of 0.93, which we are currently using in our program and have disseminated to other programs. Although it is currently unclear whether ratings of performance on the longer peer evaluation are valid, we are investigating the concurrent validity of the shorter tool by comparing peer evaluations to other measures of handoff quality as part of our current work. Last, we are only able to test associations and cannot make causal inferences.

CONCLUSION

Peer assessment of handoff skills is feasible via an electronic competency-based tool. Although there is evidence of score inflation, intern performance does increase over time and is associated with various aspects of workload, such as postcall status or working on a rotation at a community affiliate with a lower census. Together, these data can provide a foundation for developing handoff milestones that reflect the natural progression of intern competence in handoffs.

Acknowledgments

The authors thank the University of Chicago Medicine residents and chief residents, the members of the Curriculum and Housestaff Evaluation Committee, Tyrece Hunter and Amy Ice‐Gibson, and Meryl Prochaska and Laura Ruth Venable for assistance with manuscript preparation.

Disclosures

This study was funded by the University of Chicago Department of Medicine Clinical Excellence and Medical Education Award and AHRQ R03 5R03HS018278‐02 Development of and Validation of a Tool to Evaluate Hand‐off Quality.

The advent of restricted residency duty hours has thrust the safety risks of handoffs into the spotlight. More recently, the Accreditation Council for Graduate Medical Education (ACGME) has restricted hours even further, to a maximum of 16 hours for first-year residents and up to 28 hours for residents beyond their first year.[1] Although the focus of these mandates has been scheduling and staffing in residency programs, another important area of attention is handoff education and evaluation. The ACGME Common Program Requirements state that all residency programs should ensure that residents are competent in handoff communications and that programs should monitor handoffs to ensure that they are safe.[2] Moreover, recent efforts have defined milestones for handoffs, specifically that by 12 months, residents should be able to effectively communicate with other caregivers to maintain continuity during transitions of care.[3] Although more detailed handoff-specific milestones have yet to be fleshed out, the need for evaluation instruments to assess such milestones is critical. In addition, handoffs continue to represent a vulnerable time for patients in many specialties, such as surgery and pediatrics.[4, 5]

Evaluating handoffs poses specific challenges for internal medicine residency programs because handoffs are often conducted on the fly or wherever convenient, and not always at a dedicated time and place.[6] Even when evaluations could be conducted at a dedicated time and place, program faculty and leadership may not be comfortable evaluating handoffs in real time because of a lack of faculty development and recent experience with handoffs. Although supervising faculty may be in the most ideal position, given their intimate knowledge of the patient and their ability to evaluate the clinical judgment of trainees, they may face additional pressures of supervision and direct patient care that prevent their attendance at the time of the handoff. For these reasons, the peers to whom residents frequently hand off may be well positioned to evaluate the quality of a resident's handoff. Because handoffs are also conceptualized as an interactive dialogue between sender and receiver, an ideal handoff performance evaluation would capture both of these roles.[7] Peer evaluation may therefore be a viable modality to assist programs in evaluating handoffs. Peer evaluation has been shown to be an effective method of rating the performance of medical students,[8] practicing physicians,[9] and residents.[10] Moreover, peer evaluation is now a required feature of assessing internal medicine resident performance.[11] Although enthusiasm for peer evaluation has grown in residency training, its use can still be limited by a variety of problems, such as reluctance to rate peers poorly, difficulty obtaining evaluations, and questions about the utility of such evaluations. For these reasons, it is important to understand whether peer evaluation of handoffs is feasible. Therefore, the aim of this study was to assess the feasibility of an online peer evaluation survey tool for handoffs in an internal medicine residency and to characterize performance over time as well as associations between workload and performance.

METHODS

From July 2009 to March 2010, all interns on the general medicine inpatient service at 2 hospitals were asked to complete an end-of-month anonymous peer evaluation that included 14 items addressing all core competencies. The evaluation tool was administered electronically using New Innovations (New Innovations, Inc., Uniontown, OH). Interns signed out to each other in a cross-cover circuit that included 3 other interns on an every-fourth-night call cycle.[12] Call teams included 1 resident and 1 intern who worked from 7 am on the on-call day to noon on the postcall day. Therefore, postcall interns were expected to hand off to the next on-call intern before noon. Although attendings and senior residents were not required to formally supervise the handoff, supervising senior residents were often present during postcall intern sign-out to facilitate departure of the team. When interns were not postcall, they were expected to sign out before they went to clinic in the afternoon or when their foreseeable work was complete. The interns received a 45-minute lecture on handoffs and were introduced to the peer evaluation tool at intern orientation in July 2009. They were also prompted to complete the tool to the best of their ability after their general medicine rotation. We chose the general medicine rotation because each intern completed approximately 2 months of general medicine in the first year. This provided ratings over time without overburdening interns by requiring 3 additional evaluations after every inpatient rotation.

The peer evaluation was constructed to correspond to specific ACGME core competencies and was also linked to specific handoff behaviors known to be effective. The questions were adapted from items used in a validated direct-observation tool previously developed by the authors (the Handoff Clinical Evaluation Exercise), which was based on literature review as well as expert opinion.[13, 14] For example, under the core competency of communication, interns were asked to rate each other on communication skills using the anchors "No questions, no acknowledgement of to-do tasks, transfer of information face to face is not a priority" for low/unsatisfactory (1) and "Appropriate use of questions, acknowledgement and read-back of to-do and priority tasks, face-to-face communication a priority" for high/superior (9). Items referring to behaviors related to both giving and receiving handoff were used to capture the interactive dialogue between senders and receivers that characterizes ideal handoffs. In addition, specific items referring to written sign-out and verbal sign-out were developed to capture their differences. For instance, for the patient care competency in written sign-out, low/unsatisfactory (1) was defined as "Incomplete written content; to-dos omitted or requested with no rationale or plan, or with inadequate preparation (ie, request to transfuse but consent not obtained)," and high/superior (9) was defined as "Content is complete with to-dos accompanied by clear plan of action and rationale." Pilot testing was conducted with trainees, including residents not involved in the study and clinical students. The tool was also reviewed by the residency program leadership, and in an effort to standardize reporting across our evaluation forms, each item was mapped to the core competency to which it was most related. Debriefing on the instrument experience was performed with 3 residents who had an interest in medical education and handoff performance.
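To make the structure of the instrument concrete, the sketch below represents the item-to-competency mapping reported in Table 1 as a simple Python dictionary. This is purely illustrative; the variable names and the grouping step are not part of the study's actual software.

```python
# Item-to-competency mapping as reported in Table 1 (illustrative representation only).
# Each entry: item number -> (ACGME core competency, handoff role, item description).
HANDOFF_ITEMS = {
    "Q1": ("Patient care", "Sender", "Written sign-out"),
    "Q2": ("Patient care", "Sender", "Updated content"),
    "Q3": ("Medical knowledge", "Sender", "Anticipatory guidance"),
    "Q4": ("Professionalism", "Sender", "Collegiality"),
    "Q5": ("Practice-based learning and improvement", "Sender", "Accepting of feedback"),
    "Q6": ("Patient care", "Receiver", "Documentation of overnight events"),
    "Q7": ("Medical knowledge", "Receiver", "Clinical decision making during cross-cover"),
    "Q8": ("Interpersonal and communication skills", "Receiver", "Listening behavior when receiving sign-outs"),
    "Q9": ("Interpersonal and communication skills", "Receiver", "Communication when receiving sign-out"),
    "Q10": ("Professionalism", "Receiver", "Acknowledgement of professional responsibility"),
    "Q11": ("Professionalism", "Receiver", "Timeliness/responsiveness"),
    "Q12": ("Systems-based practice", "Receiver", "Resource use"),
    "Q13": ("Overall", "Both", "Overall sign-out quality"),
}

# Example: group item numbers by competency for reporting.
by_competency = {}
for item, (competency, role, label) in HANDOFF_ITEMS.items():
    by_competency.setdefault(competency, []).append(item)
print(by_competency)
```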

The tool was deployed following a brief educational session for interns, in which the tool was previewed and reviewed. Interns were counseled to use the form as a global assessment of performance over the course of the month, in contrast to an episodic evaluation. This approach was also intended to avoid negative event bias, in which a rater allows a single negative event to color the perception of a person's performance long after the event has passed.

To analyze the data, descriptive statistics were used to summarize mean performance across domains. To assess whether intern performance improved over time, we split the academic year into 3 periods of 3 months each, a division we have used in earlier studies assessing intern experience.[15] Prior to analysis, postcall interns were identified using the intern monthly call schedule in the AMiON software program (Norwich, VT), and their evaluations were labeled accordingly. All names were then removed and replaced with unique identifiers for the evaluator and the evaluatee. In addition, each evaluation was categorized as having come from either the main teaching hospital or the community hospital affiliate.
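As a rough illustration of this preprocessing, the sketch below labels each evaluation with its academic-year season, flags postcall evaluatees from a call-schedule lookup, and replaces names with numeric identifiers. The column names, the toy schedule, and the sample rows are hypothetical; the study used AMiON and New Innovations exports rather than this code.

```python
import pandas as pd

# Hypothetical export: one row per completed peer evaluation (names and values are made up).
evals = pd.DataFrame({
    "evaluator": ["Intern A", "Intern B", "Intern A"],
    "evaluatee": ["Intern B", "Intern C", "Intern C"],
    "month":     [7, 10, 2],                 # calendar month of the rotation
    "hospital":  ["main", "community", "main"],
    "overall":   [9, 8, 9],                  # overall sign-out rating on the 1-9 scale
})

def month_to_season(month: int) -> int:
    """Map a calendar month to the academic-year season used in the analysis:
    1 = July-September, 2 = October-December, 3 = January-March."""
    if month in (7, 8, 9):
        return 1
    if month in (10, 11, 12):
        return 2
    return 3

evals["season"] = evals["month"].map(month_to_season)

# Flag postcall evaluatees from a (hypothetical) call-schedule lookup keyed by intern and month.
call_schedule = {("Intern B", 7): True, ("Intern C", 10): False, ("Intern C", 2): True}
evals["postcall"] = [call_schedule.get(key, False)
                     for key in zip(evals["evaluatee"], evals["month"])]

# De-identify: replace names with stable numeric codes and drop the original columns.
evals["evaluator_id"] = pd.factorize(evals["evaluator"])[0]
evals["evaluatee_id"] = pd.factorize(evals["evaluatee"])[0]
evals = evals.drop(columns=["evaluator", "evaluatee"])
print(evals)
```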

Multivariate random effects linear regression models, controlling for evaluator, evaluatee, and hospital, were used to assess the associations of time (using indicator variables for season) and postcall status with intern performance. In addition, because of skewness in the ratings, we also undertook an additional analysis, transforming the ratings into dichotomous variables reflecting superior performance; the main findings did not change after conducting conditional ordinal logistic regression. We also investigated within-subject and between-subject variation using intraclass correlation coefficients. The within-subject intraclass correlation enabled assessment of inter-rater reliability, and the between-subject intraclass correlation enabled assessment of evaluator effects. Evaluator effects can encompass a variety of forms of rater bias, such as leniency (the evaluator tends to rate individuals uniformly positively), severity (the evaluator largely avoids positive ratings), or the halo effect (a single strongly positive attribute of the individual being evaluated overrides the attribute actually being assessed). All analyses were completed using Stata 10.0 (StataCorp, College Station, TX), with statistical significance defined as P < 0.05. This study was deemed exempt from institutional review board review after all data were deidentified prior to analysis.
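The sketch below shows one way such a model could be specified in Python with statsmodels, using synthetic data (all values are simulated for illustration): a linear model with a random intercept for the intern being evaluated, with season, postcall status, and site as covariates, and a between-intern intraclass correlation derived from the variance components. The study itself was analyzed in Stata and also adjusted for the evaluator, which this abbreviated sketch omits.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the de-identified evaluation file (all numbers are made up).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "evaluatee_id": rng.integers(0, 40, n),
    "season":       rng.integers(1, 4, n),      # 1, 2, or 3
    "postcall":     rng.integers(0, 2, n),
    "community":    rng.integers(0, 2, n),
})
intern_effect = rng.normal(0, 0.3, 40)          # intern-level random intercepts
df["overall"] = np.clip(
    7.5
    + 0.2 * (df["season"] - 1)                  # modest improvement by season
    - 0.1 * df["postcall"]                      # slightly lower ratings postcall
    + 0.2 * df["community"]                     # slightly higher at the community site
    + intern_effect[df["evaluatee_id"]]
    + rng.normal(0, 0.5, n),
    1, 9,
)

# Random intercept for the evaluatee; season entered as indicator variables via C().
model = smf.mixedlm("overall ~ C(season) + postcall + community",
                    data=df, groups=df["evaluatee_id"])
fit = model.fit()
print(fit.summary())

# Between-intern intraclass correlation: share of total variance due to the intern being rated.
var_intern = float(fit.cov_re.iloc[0, 0])
var_resid = float(fit.scale)
print("ICC:", round(var_intern / (var_intern + var_resid), 2))
```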

RESULTS

From July 2009 to March 2010, 31 interns (78%) returned 60% (172/288) of the peer evaluations they were asked to complete. Almost all interns (39/40, 98%) were evaluated at least once, with a median of 4 ratings per intern (range, 1-9). Thirty-five percent of ratings occurred when an intern was rotating at the community hospital. Ratings were very high on all domains (mean, 8.3-8.6). Overall sign-out performance was rated as 8.4 (95% confidence interval [CI], 8.3-8.5), with over 55% rating peers as 9 (the maximal score). The lowest score given was 5. Individual item means ranged from a low of 8.34 (95% CI, 8.21-8.47) for updating written sign-outs to a high of 8.60 (95% CI, 8.50-8.69) for collegiality (Table 1). The internal consistency of the instrument, calculated using all items, was very high (Cronbach α = 0.98).

Table 1. Mean Intern Ratings on Sign-out Peer Evaluation by Item and Competency

ACGME Core Competency | Role | Item | Item No. | Mean | 95% CI | Range | % Receiving 9 as Rating
Patient care | Sender | Written sign-out | Q1 | 8.34 | 8.25 to 8.48 | 6-9 | 53.2
Patient care | Sender | Updated content | Q2 | 8.35 | 8.22 to 8.47 | 5-9 | 54.4
Patient care | Receiver | Documentation of overnight events | Q6 | 8.41 | 8.30 to 8.52 | 6-9 | 56.3
Medical knowledge | Sender | Anticipatory guidance | Q3 | 8.40 | 8.28 to 8.51 | 6-9 | 56.3
Medical knowledge | Receiver | Clinical decision making during cross-cover | Q7 | 8.45 | 8.35 to 8.55 | 6-9 | 56.0
Professionalism | Sender | Collegiality | Q4 | 8.60 | 8.51 to 8.68 | 6-9 | 65.7
Professionalism | Receiver | Acknowledgement of professional responsibility | Q10 | 8.53 | 8.43 to 8.62 | 6-9 | 62.4
Professionalism | Receiver | Timeliness/responsiveness | Q11 | 8.50 | 8.39 to 8.60 | 6-9 | 61.9
Interpersonal and communication skills | Receiver | Listening behavior when receiving sign-outs | Q8 | 8.52 | 8.42 to 8.62 | 6-9 | 63.6
Interpersonal and communication skills | Receiver | Communication when receiving sign-out | Q9 | 8.52 | 8.43 to 8.62 | 6-9 | 63.0
Systems-based practice | Receiver | Resource use | Q12 | 8.45 | 8.35 to 8.55 | 6-9 | 55.6
Practice-based learning and improvement | Sender | Accepting of feedback | Q5 | 8.45 | 8.34 to 8.55 | 6-9 | 58.7
Overall | Both | Overall sign-out quality | Q13 | 8.44 | 8.34 to 8.54 | 6-9 | 55.3

NOTE: Abbreviations: ACGME, Accreditation Council for Graduate Medical Education; CI, confidence interval.
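For reference, the internal-consistency statistic reported above (Cronbach α) can be computed from an evaluations-by-items score matrix as shown in this brief sketch; the scores below are invented solely to demonstrate the calculation.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (ratings x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy example: 5 completed evaluations x 3 items on the 1-9 scale (hypothetical values).
toy_scores = np.array([
    [9, 9, 8],
    [8, 8, 8],
    [9, 8, 9],
    [7, 7, 8],
    [9, 9, 9],
])
print(round(cronbach_alpha(toy_scores), 2))
```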

Mean ratings for each item increased in seasons 2 and 3, and the increases were statistically significant using a test for trend across ordered groups. However, in multivariate regression models, improvements remained statistically significant for only 4 items (Figure 1): 1) communication skills, 2) listening behavior, 3) accepting professional responsibility, and 4) accessing the system (Table 2). Specifically, when compared to season 1, improvements in communication skills were seen in season 2 (+0.34 [95% CI, 0.08-0.60], P = 0.009) and were sustained in season 3 (+0.34 [95% CI, 0.06-0.61], P = 0.018). A similar pattern was observed for listening behavior, with improvements of similar magnitude with increasing intern experience (season 2, +0.29 [95% CI, 0.04-0.55], P = 0.025, compared to season 1). Although accessing-the-system scores showed a similar pattern of improvement, with an increase in season 2 compared to season 1, the magnitude of the change was smaller (season 2, +0.21 [95% CI, 0.03-0.39], P = 0.023). Interestingly, scores for accepting professional responsibility rose during season 2, but the difference did not reach statistical significance until season 3 (+0.37 [95% CI, 0.08-0.65], P = 0.012 compared to season 1).

Figure 1. Improvements over time in domains of sign-out performance by season, where season 1 is July to September, season 2 is October to December, and season 3 is January to March. Results are obtained from random effects linear regression models controlling for evaluator, evaluatee, postcall status, and site (community vs tertiary).
Table 2. Increasing Scores on Peer Handoff Evaluation by Season

Values are coefficients (95% CI) for each outcome.

Predictor | Communication Skills | Listening Behavior | Professional Responsibility | Accessing the System | Written Sign-out Quality
Season 1 | Ref | Ref | Ref | Ref | Ref
Season 2 | 0.29 (0.04 to 0.55)* | 0.34 (0.08 to 0.60)* | 0.24 (-0.03 to 0.51) | 0.21 (0.03 to 0.39)* | -0.05 (-0.25 to 0.15)
Season 3 | 0.29 (0.02 to 0.56)* | 0.34 (0.06 to 0.61)* | 0.37 (0.08 to 0.65)* | 0.18 (0.01 to 0.36)* | 0.08 (-0.13 to 0.30)
Community hospital | 0.18 (0.00 to 0.37) | 0.23 (0.04 to 0.43)* | 0.06 (-0.13 to 0.26) | 0.13 (0.00 to 0.25) | 0.24 (0.08 to 0.39)*
Postcall | -0.10 (-0.25 to 0.05) | -0.04 (-0.21 to 0.13) | -0.02 (-0.18 to 0.13) | -0.05 (-0.16 to 0.05) | -0.18 (-0.31 to -0.05)*
Constant | 7.04 (6.51 to 7.58) | 6.81 (6.23 to 7.38) | 7.04 (6.50 to 7.60) | 7.02 (6.59 to 7.45) | 6.49 (6.04 to 6.94)

NOTE: Results are from multivariable linear regression models examining the associations of season, community hospital, and postcall status with ratings, controlling for subject (evaluatee) random effects and evaluator fixed effects (evaluator and evaluatee effects not shown). Abbreviations: CI, confidence interval. *P < 0.05.

In addition to the effect of increasing experience, postcall interns were rated significantly lower than nonpostcall interns on 2 items: 1) written sign-out quality (8.21 vs 8.39, P = 0.008) and 2) accepting feedback (practice-based learning and improvement) (8.25 vs 8.42, P = 0.006). Interestingly, when interns were on the community hospital general medicine rotation, where overall census was much lower than at the teaching hospital, peer ratings were significantly higher for overall handoff performance and for 7 of the remaining 12 specific handoff domains (written sign-out, updated content, collegiality, accepting feedback, documentation of overnight events, clinical decision making during cross-cover, and listening behavior) (P < 0.05 for all, data not shown).

Last, significant evaluator effects were observed and contributed to the variance in ratings. Using intraclass correlation coefficients (ICCs), we found greater within-intern than between-intern variation: scores given by the same evaluator tended to be strongly correlated with one another (eg, ICC for overall performance = 0.64), more so than scores from multiple evaluations of the same intern (eg, ICC for overall performance = 0.18).


References
  1. Nasca TJ, Day SH, Amis ES; the ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363.
  2. Common program requirements. Available at: http://acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed December 10, 2012.
  3. Green ML, Aagaard EM, Caverzagie KJ, et al. Charting the road to competence: developmental milestones for internal medicine residency training. J Grad Med Educ. 2009;1(1):5-20.
  4. Greenberg CC, Regenbogen SE, Studdert DM, et al. Patterns of communication breakdowns resulting in injury to surgical patients. J Am Coll Surg. 2007;204(4):533-540.
  5. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50(1):57-63.
  6. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1(4):257-266.
  7. Gibson SC, Ham JJ, Apker J, Mallak LA, Johnson NA. Communication, communication, communication: the art of the handoff. Ann Emerg Med. 2010;55(2):181-183.
  8. Arnold L, Willouby L, Calkins V, Gammon L, Eberhardt G. Use of peer evaluation in the assessment of medical students. J Med Educ. 1981;56:35-42.
  9. Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA. 1993;269:1655-1660.
  10. Thomas PA, Gebo KA, Hellmann DB. A pilot study of peer review in residency training. J Gen Intern Med. 1999;14(9):551-554.
  11. ACGME Program Requirements for Graduate Medical Education in Internal Medicine Effective July 1, 2009. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramRequirements/140_internal_medicine_07012009.pdf. Accessed December 10, 2012.
  12. Arora V, Dunphy C, Chang VY, Ahmad F, Humphrey HJ, Meltzer D. The effects of on-duty napping on intern sleep time and fatigue. Ann Intern Med. 2006;144(11):792-798.
  13. Farnan JM, Paro JA, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2010;25(2):129-134.
  14. Horwitz LI, Dombroski J, Murphy TE, Farnan JM, Johnson JK, Arora VM. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365-2702.2012.04131.x.
  15. Arora VM, Georgitis E, Siddique J, et al. Association of workload of on-call medical interns with on-call sleep duration, shift duration, and participation in educational activities. JAMA. 2008;300(10):1146-1153.
  16. Gakhar B, Spencer AL. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign-out practices of internal medicine interns. Acad Med. 2010;85(7):1182-1188.
  17. Bump GM, Bost JE, Buranosky R, Elnicki M. Faculty member review and feedback using a sign-out checklist: improving intern written sign-out. Acad Med. 2012;87(8):1125-1131.
  18. Helms AS, Perez TE, Baltz J, et al. Use of an appreciative inquiry approach to improve resident sign-out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287-291.
Display Headline
Implementing Peer Evaluation of Handoffs: Associations With Experience and Workload
Issue
Journal of Hospital Medicine - 8(3)
Page Number
132-136
Article Source
Copyright © 2012 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Vineet Arora MD, University of Chicago, 5841 S Maryland Ave., MC 2007 AMB W216, Chicago, IL 60637; Tel.: (773) 702-8157, Fax: (773) 834-2238; E-mail: [email protected]

Medicare Funding May Become Enormous Burden for Generations of Future Taxpayers

Article Type
Changed
Wed, 03/27/2019 - 12:25
Display Headline
Medicare Funding May Become Enormous Burden for Generations of Future Taxpayers

February 2033

Dear sons:

Now that most of my baby boomer friends are 80 or 90 years old and still hanging on, I wanted to apologize for leaving you in such a mess. Looking back, we all should have made some tough choices back in 2013, when some thoughtful belt-tightening would have put our country in a fiscally sound position to provide healthcare and a safety net, not only to our senior citizens, but to all Americans. After today’s riots across the country, I felt I had to reach out to you and beg you to let rational minds prevail.

My fellow seniors, who paid into the Medicare and Social Security programs through our payroll taxes during the 30 to 40 years we worked in American industries, believe we are entitled to live forever with unlimited healthcare paid for by you. We are lined up almost every day at one doctor’s office or another to have our fourth joint replacement or our monthly MRI. Even though the actuaries tell us we all blew through our own contributions to Medicare sometime around our 75th birthdays, the general thinking of my friends on the golf course is that we paid for our parents’ healthcare and retirement, and you should just suck it up and stop whining.

Now I do admit that my friends tend to overlook the fact that when we were just in our 50s, like you are now, there were eight or nine workers (i.e. taxpayers) for every retiree. Now it seems it is one taxpayer working to support one retiree. The math just doesn’t work anymore. No wonder your tax burden is so suffocating that young workers can’t afford a home or a second car or even a vacation. I can see why there is talk by some of rationing care, but some of the rhetoric is kind of frightening.

Yes, there are more 90-year-olds with severe dementia on chronic dialysis than I would like to see. I don’t necessarily agree that everyone has a right to die with a normal BUN. Our generation did some great things with immunizations, cancer prevention, reducing the risks of coronary heart disease, and stroke prevention and treatment. The end result is that many of those who would have died earlier have lived beyond our country’s means to provide for them. For heaven’s sake, there are more than 1 million Americans over the age of 100 today. Once a woman gets past 65, it seems they are destined to live indefinitely.

Believe it or not, I was around in the 1960s when Medicare was first discussed and people were looking at life expectancies in the early 70s. No one saw the advent of so much expensive technology in diagnostic testing and surgical intervention. Despite more bipartisan national commissions and reports than I care to remember, no president or Congress has had the cojones to make the tough choices to provide the basic health needs for seniors in a fiscally sound system that doesn’t overwhelm the workforce.

I know the slogans urge a move from Medicare to “MediCan’t.” I know some want to bar seniors from getting flu shots and want to have pneumonia be the old man’s friend again. I sense a feeling that the elderly are becoming the enemy of the working class. I hear the rants that most of our nation’s wealth is held by those over 65, yet my generation wants more and more, feeling we paid for this and we deserve everything we have coming to us.

Once again, sorry this all had to fall on you, but I have got to run. I am going to see your grandmother. I can’t believe how well she is recovering from arthroscopic surgery. Pretty amazing for someone who is 105 years old.

Love,

Dad


Dr. Wellikson is CEO of SHM.

Issue
The Hospitalist - 2013(02)

Southern Hospital Medicine Conference Drives Home the Value of Hospitalists

Article Type
Changed
Wed, 03/27/2019 - 12:25
Display Headline
Southern Hospital Medicine Conference Drives Home the Value of Hospitalists

More than 300 hospitalists and other clinicians recently attended the 13th annual Southern Hospital Medicine Conference in Atlanta. The conference is a collaboration between the Emory University School of Medicine in Atlanta and the Ochsner Health System in New Orleans, and the meeting site has alternated between the two cities each year since 2005.

The prevailing conference theme in 2012 was “Value and Values in Hospital Medicine,” alluding to the value that hospitalists bring to the medical community and hospitals and the values shared by hospitalists. The conference offered five pre-courses and more than 50 sessions focused on educating hospitalists on current best practices within core topic areas, including clinical care, quality improvement, healthcare information technology, innovative care models, systems of care, and transitions of care. A judged poster competition featured research and clinical vignette abstracts, showcasing interesting clinical cases as well as new research in hospital medicine.

One of the highlights of this year’s conference was the keynote address delivered by Dr. William A. Bornstein, chief quality and medical officer of Emory Healthcare. Dr. Bornstein discussed the various aspects of quality and cost in hospital care. He described the challenges of defining quality and measuring cost when trying to calculate the “value” equation in medicine (value = quality/cost), and he outlined the STEEEP (safe, timely, effective, efficient, equitable, patient-centered) aims of quality described by the Institute of Medicine in 2001.

Dr. Bornstein’s own definition of quality is “partnering with patients and families to reliably, 100% of the time, deliver when, where, and how they want it—and with minimal waste—care based on the best available evidence and consistent with patient and family values and preferences.” To measure outcomes, he said, we need to address system structure (what’s in place before the patient arrives), process (what we do for the patient), and culture (how we get buy-in from all stakeholders). Together, these factors determine outcomes, the measurement of which requires risk adjustment and, ideally, long-term follow-up data, he said.

Dr. Bornstein also discussed the need to develop standard processes so that equivalent clinicians can follow the same steps and achieve the same results. When physicians “do it the same” (i.e. standardized protocols), error rates and costs decrease, he explained.

Dr. Bornstein also focused on transformative solutions to address problems in healthcare as a whole, rather than attempting to fix problems piecemeal.

Jason Stein, MD, SFHM, offered another conference highlight: a pre-conference program and plenary session on an innovative approach to improve hospital outcomes through implementation of the accountable-care unit (ACU). Dr. Stein, director of the clinical research program at Emory School of Medicine, described the current state of hospital care as asynchronous, with various providers caring for the patient without much coordination. For example, the physician sees the patient at 9 a.m., followed by the nurse at 10 a.m., and then finally the visiting family at 11 a.m. The ACU model of care would involve all the providers rounding with the patient and family at a scheduled time daily to provide synchronous care.

Dr. Stein described an ACU as a geographic inpatient area consistently responsible for the clinical, service, and cost outcomes it produces. Features of this unit include:

  • Assignment of physicians by units to enhance predictability;
  • Cohesiveness and communication;
  • Structured interdisciplinary bedside rounds to consistently deliver evidence-based, patient-centered care;
  • Evaluation of performance data by unit instead of facility or service line; and
  • A dyad partnership involving a nurse unit director and a physician unit medical director.

ACU implementation at Emory has led to decreased mortality, reduced length of stay, and improved patient satisfaction compared to traditional units, according to Dr. Stein. While the ACU might not be suited for every hospital, he said, all hospitals can learn from components of this innovative approach to deliver better patient care.

The ever-changing state of HM in the U.S. remains a challenge, but it continues to generate innovation and excitement. The high number of engaged participants from 30 different states attending the 13th annual Southern Hospital Medicine Conference demonstrates that hospitalists are eager to learn and ready to improve their practice in order to provide high-value healthcare in U.S. hospitals today.


Dr. Lee is vice chairman in the department of hospital medicine at Ochsner Health System. Dr. Smith is an assistant director for education in the division of hospital medicine at Emory University. Dr. Deitelzweig is system chairman in the department of hospital medicine and medical director for regional business development at Ochsner Health System. Dr. Wang is the division director of hospital medicine at Emory University. Dr. Dressler is director for education in the division of hospital medicine and an associate program director for the J. Willis Hurst Internal Medicine Residency Program at Emory University.

Issue
The Hospitalist - 2013(02)
Publications
Topics
Sections

More than 300 hospitalists and other clinicians recently attended the 13th annual Southern Hospital Medicine Conference in Atlanta. The conference is a joint collaboration between the Emory University School of Medicine in Atlanta and Ochsner Health System New Orleans. The meeting site has alternated between the two cities each year since 2005.

The prevailing conference theme in 2012 was “Value and Values in Hospital Medicine,” alluding to the value that hospitalists bring to the medical community and hospitals and the values shared by hospitalists. The conference offered five pre-courses and more than 50 sessions focused on educating hospitalists on current best practices within core topic areas, including clinical care, quality improvement, healthcare information technology, innovative care models, systems of care, and transitions of care. A judged poster competition featured research and clinical vignettes abstracts, with interesting clinical cases as well as new research in hospital medicine.

One of the highlights of this year’s conference was the keynote address delivered by Dr. William A. Bornstein, chief quality and medical officer of Emory Healthcare. Dr. Bornstein discussed the various aspects of quality and cost in hospital care. He described the challenges in defining quality and measuring cost when trying to calculate the “value” equation in medicine (value=quality/cost). He outlined the Institute of Medicine’s previously described STEEEP (safe, timely, effective, efficient, equitable, patient-centered) aims of quality in 2001.

Dr. Bornstein’s own definition for quality is “partnering with patients and families to reliably, 100% of the time, deliver when, where, and how they want it—and with minimal waste—care based on the best available evidence and consistent with patient and family values and preferences.” To measure outcome, he said, we need to address system structure (what’s in place before the patient arrives), process (what we do for the patient), and culture (how we can get the buy-in from all stakeholders). The sum of these factors achieves outcome, which requires risk adjustment and, ideally, long-term follow-up data, he said.

Dr. Bornstein also discussed the need to develop standard processes whereby equivalent clinicians can follow similar processes to achieve the same results. When physicians “do it the same” (i.e. standardized protocols), error rates and cost decrease, he explained.

Dr. Bornstein also focused on transformative solutions to address problems in healthcare as a whole, rather than attempting to fix problems piecemeal.

Jason Stein, MD, SFHM, offered another conference highlight: a pre-conference program and plenary session on an innovative approach to improve hospital outcomes through implementation of the accountable-care unit (ACU). Dr. Stein, director of the clinical research program at Emory School of Medicine, described the current state of hospital care as asynchronous, with various providers caring for the patient without much coordination. For example, the physician sees the patient at 9 a.m., followed by the nurse at 10 a.m., and then finally the visiting family at 11 a.m. The ACU model of care would involve all the providers rounding with the patient and family at a scheduled time daily to provide synchronous care.

Dr. Stein described an ACU as a geographic inpatient area consistently responsible for the clinical, service, and cost outcomes it produces. Features of this unit include:

  • Assignment of physicians by units to enhance predictability;
  • Cohesiveness and communication;
  • Structured interdisciplinary bedside rounds to consistently deliver evidence-based, patient-centered care;
  • Evaluation of performance data by unit instead of facility or service line; and
  • A dyad partnership involving a nurse unit director and a physician unit medical director.

ACU implementation at Emory has led to decreased mortality, reduced length of stay, and improved patient satisfaction compared to traditional units, according to Dr. Stein. While the ACU might not be suited for all, he said, all hospitals can learn from various components of this innovative approach to deliver better patient care.

 

 

The ever-changing state of HM in the U.S. remains a challenge, but it continues to generate innovation and excitement. The high number of engaged participants from 30 different states attending the 13th annual Southern Hospital Medicine Conference demonstrates that hospitalists are eager to learn and ready to improve their practice in order to provide high-value healthcare in U.S. hospitals today.


Dr. Lee is vice chairman in the department of hospital medicine at Ochsner Health System. Dr. Smith is an assistant director for education in the division of hospital medicine at Emory University. Dr. Deitelzweig is system chairman in the department of hospital medicine and medical director for regional business development at Ochsner Health System. Dr. Wang is the division director of hospital medicine at Emory University. Dr. Dressler is director for education in the division of hospital medicine and an associate program director for the J. Willis Hurst Internal Medicine Residency Program at Emory University.

Tips to Help Hospital Medicine Group Leaders Know When to Grow Their Service

SHM board member Burke Kealey, MD, SFHM, medical director of hospital specialties at HealthPartners Medical Group in St. Paul, Minn., says there is no easy way to know when it is the right time to grow. He offers four tips to hospitalist group leaders grappling with the question:

  • Benchmark: Use the SHM survey, MGMA data, or local analyses to determine best practices. But don’t be a slave to data that don’t account for the particulars of your payor mix, patient population, etc.
  • Network: Meet with group leaders in nearby practices. Talk to administrators. Understand the competitive set for your hospital and know what their data sets are.
  • Communicate: Talk to doctors, C-suite executives, and everyone in between. Front-line physicians and nurses often know better than practice heads which resources are needed, and where.
  • Stay flexible: Don’t be wedded to needing to grow. Maybe a group has physicians who want extra shifts to handle a new schedule. Maybe the installation of new technology will improve efficiency and eliminate the need for a new physician.

Fundamentals of Highly Reliable Organizations Could Benefit Hospitalists

Reliability. This sounds like a decent trait. Who wouldn’t want to be described as “reliable”? It sounds reputable whether you’re a person, a car, or a dishwasher. So how does one become or emulate reliability: being predictable, punctual, “reproducible,” if you will?

Organizational reliability has received a fair bit of press these days. The industries that have come to embrace reliability concepts are those in which failure is easy to come by and likely to be catastrophic when it occurs. In the medical industry, failure happens to people, not widgets or machines, so by definition it tends to be catastrophic. These failures generally come in three flavors:

  • The expected fails to occur (e.g., a patient with pneumonia does not receive antibiotics on time);
  • The unexpected occurs (e.g., a patient falls and breaks a hip); or
  • The unexpected was never anticipated (e.g., a low-risk patient has a pulseless electrical activity [PEA] arrest).

A fair bit of research has been done on how organizations can become more reliable. In their book “Managing the Unexpected: Assuring High Performance in an Age of Complexity,”1 Karl Weick and Kathleen Sutcliffe studied firefighters, aircraft carrier crews, and nuclear power plant employees. All of these groups share a fundamental similarity: failure in their workplace is catastrophically dangerous, so they must continuously strive to reduce its risk and mitigate it effectively. The Agency for Healthcare Research and Quality (AHRQ) specifically studied, through case studies and site visits, how some healthcare organizations have achieved success in the different domains of reliability.2

What both studies found is that there are five prerequisites that, if done well, lead to an organizational “state of mindfulness.” What they and others have found in their research of highly reliable organizations (HROs) is not that they have failure-free operations, but that they continuously and “mindfully” think about how to be failure-free. Inattention and complacency are the biggest threats to reliability.

The Fundamentals

The first prerequisite is sensitivity to operations. This refers to actively seeking information on how things actually are working, instead of how they are supposed to be working. It means being acutely aware of all operations, down to the smallest details: Does the patient have an armband on? Is the nurse washing their hands? Is the whiteboard information correct? Is the bed alarm enabled? It is the state of mind in which everyone knows how things should work, look, feel, and sound, and can recognize when something is out of bounds.

The next prerequisite is a preoccupation with failure. This refers to a system in which failures and near-misses are completely transparent, openly and honestly discussed (without assigning individual blame or taking punitive action), and learned from communally. This “group thought” continually reaffirms that systems, and everyone in them, are fallible. It is the complete opposite of inattention and complacency. It is continuously asking, “What can go wrong, how can it go wrong, when will it go wrong, and how can I stop it?”

The next prerequisite is reluctance to oversimplify. This does not imply that simplicity is bad, but that oversimplification is lethal. It forces people and organizations to avoid shortcuts and to resist simplistic explanations for situations that are inherently complicated. Think of this as making a complicated soufflé: leave out a step or an ingredient, and the product will be far from a soufflé.

The next prerequisite is deference to expertise. This principle recognizes that authority and/or rank are not equivalent to expertise. This assumes that people and organizations are willing and able to defer decision-making to the person who will make the best decision, not to who ranks highest in the organizational chart. A junior hospitalist might be much more likely to make a good decision on building a new order set than the hospitalist director is.

The last prerequisite is resilience. Webster’s defines resilience as “the capability of a strained body to recover its size and shape after deformation caused especially by compressive stress … an ability to recover from or adjust easily to misfortune or change.” The “compressive stresses” and “misfortune or change” can present in a number of ways, including bad patient outcomes, bad national press, or bad hospital rankings. A good HRO is not one that never experiences unexpected events but one that is not disabled by them; such organizations routinely train and practice for worst-case scenarios. It is easy to “audit” resilience by looking at the organizational response to unexpected events: Are they handled with grace, ease, and speed, or with panic, anxiety, and ongoing uncertainty? Resilience involves functioning adequately despite adversity, recovering well, and learning from the experience.

Take-Home Message

The first three principles relate to how organizations can anticipate and reduce the risk of failure; the last two principles relate to how organizations mitigate the extent or severity of failure when it occurs. Together, they create the state of mindfulness, in which all senses are open and alert for signs of aberrations in the system, and where there is continuous learning of how to make the system function better.

What does this mean for a hospitalist working in an HRO? Most hospitalists are on the front lines, where they routinely see where and how things can fail. They need to resist the urge to become complacent and remain continuously alert to signals that the system is not functioning safely for the patient. And when things do go awry, they need to be part of the resilience plan: working with their teams to swiftly and effectively mitigate ongoing risks, and deferring decisions to expertise rather than authority.

It also requires that each of us work within multidisciplinary teams in which all members add to the “state of mindfulness,” including patients and their families (who very often notice “aberrancies” before anyone else does). Think of your hospital as described by Gordon Bethune, the former CEO of Continental Airlines. When asked why all employees received a bonus for on-time departures (instead of only employees on the front line), he held up his wristwatch and said, “What part of this watch don’t you think we need?”

Hospitalists can be powerful motivators for a culture change that empowers all hospital employees to be engaged in anticipating and managing failures—just by being mindful. This is a great place to start.


Dr. Scheurer is a hospitalist and chief quality officer at the Medical University of South Carolina in Charleston. She is physician editor of The Hospitalist. Email her at [email protected].

References

  1. Weick KE, Sutcliffe KM. Managing the Unexpected: Resilient Performance in an Age of Uncertainty. 2nd ed. Hoboken, NJ: John Wiley & Sons; 2007.
  2. Agency for Healthcare Research and Quality. Becoming a high reliability organization: operational advice for hospital leaders. AHRQ website. Available at: http://www.ahrq.gov/qual/hroadvice/. Accessed December 10, 2012.

Hospital Medicine Experts Outline Criteria To Consider Before Growing Your Group

Job One is always patient safety and physician sanity. If you are careful about growth and buy-in, and you do the committee work and support everybody so that you’re firmly entrenched in the hospital as a value, it’s much safer to grow. Growing for the sake of growing, you risk overexpansion, and that’s dangerous.

—Brian Hazen, MD, medical director, Inova Fairfax Hospital Group, Fairfax, Va.

Ilan Alhadeff, MD, SFHM, program medical director for Cogent HMG at Hackensack University Medical Center in Hackensack, N.J., pays a lot of attention to the work relative-value units (wRVUs) his hospitalists are producing and the number of encounters they’re tallying. But he’s not particularly worried about what he sees on a daily, weekly, or even monthly basis; he takes a monthslong view of his data when he wants to forecast whether he is going to need to think about adding staff.

“When you look at months, you can start seeing trends,” Dr. Alhadeff says. “Let’s say there’s 16 to 18 average encounters. If your average is 16, you’re saying, ‘OK, you’re on the lower end of your normal.’ And if your average is 18, you’re on the higher end of normal. But if you start seeing 18 every month, odds are you’re going to start getting to 19. So at that point, that’s raising the thought that we need to start thinking about bringing someone else on.”
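
For group leaders who track these figures in a spreadsheet or a simple script, Dr. Alhadeff’s rule of thumb translates directly into a trailing-average check. The Python sketch below is only an illustration of that logic: the 16–18 “normal” range comes from the quote above, while the monthly figures, the three-month window, and the function name are hypothetical assumptions, not anything his group actually uses.

    # Illustrative only: flag when the trailing average of monthly encounters per
    # hospitalist sits at or above the top of the group's normal range (16-18 here).
    NORMAL_HIGH = 18

    def staffing_flag(monthly_avgs, window=3):
        """Return True if the mean of the last `window` months is at or above NORMAL_HIGH."""
        if len(monthly_avgs) < window:
            return False
        return sum(monthly_avgs[-window:]) / window >= NORMAL_HIGH

    # Hypothetical monthly averages, oldest to newest.
    encounters = [16.2, 16.8, 17.4, 17.9, 18.1, 18.3]
    if staffing_flag(encounters):
        print("Encounters trending at the top of range; start the hiring conversation.")

The point is the habit rather than the arithmetic: the trigger fires on a sustained trend, not on a single busy week.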

It’s a dance HM group leaders around the country have to do when confronted with the age-old question: Should we expand our service? The answer is more art than science, experts say, as there is no standardized formula for knowing when an HM group should request more support from administration to add an FTE—or two or three. And, in a nod to the HM adage that if you’ve seen one HM group (HMG), you’ve seen one HMG, the roadmap to expansion varies from place to place. But in a series of interviews with The Hospitalist, physicians, consultants, and management experts pointed to broad themes that guide the process, including:

  • Data. Dashboard metrics, such as average daily census (ADC), wRVUs, patient encounters, and length of stay (LOS), must be quantified. No discussion of expansion can proceed intelligently without a firm understanding of where a practice currently stands.
  • Benchmarking. Collating figures isn’t enough. Measure your group against other local HMGs, regional groups, and national standards; SHM’s 2012 State of Hospital Medicine report is a good place to start (a simple comparison sketch follows this list).
  • Scope or schedule. Pushing into new business lines (e.g., orthopedic comanagement) often requires new staff, as does adding shifts to provide 24-hour on-site coverage. Those arguments differ from the case for expanding based on increased patient encounters.
  • Physician buy-in. Group leaders cannot unilaterally decide it’s time to add staff, particularly in small-group settings in which hiring a new physician means taking revenue away from the existing group, if only in the short term. Talk with group members before embarking on expansion, and keep track of physician turnover; if hospitalists are leaving often, it could be a sign the group is understaffed.
  • Administrative buy-in. If a group leader’s request for a new hire comes without months of conversation ahead of it, it’s likely too late. Prepare C-suite executives in advance for potential growth needs so the discussion does not come as a surprise.
  • Know your market. Don’t wait until a new active-adult community floods the hospital with patients to begin analyzing the impact new residents might have. The same goes for companies bringing thousands of new workers to an area.
  • Prepare to do nothing. Too often, group leaders think the easiest solution is hiring a physician to lessen the workload. Instead, first exhaust the efficiency and infrastructure improvements that could accomplish the same goal.
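
As promised in the benchmarking bullet above, here is a minimal comparison sketch in Python. It assumes a group already pulls its dashboard metrics; every group value and benchmark figure below is a hypothetical placeholder, not a number taken from the SHM report or MGMA data.

    # Illustrative only: put the group's dashboard metrics next to an external
    # benchmark and report the percentage variance. All values are placeholders.
    group_metrics = {"avg_daily_census": 17.5, "wrvus_per_fte": 4300, "avg_los_days": 4.8}
    benchmarks    = {"avg_daily_census": 15.0, "wrvus_per_fte": 4200, "avg_los_days": 4.5}

    for metric, value in group_metrics.items():
        reference = benchmarks[metric]
        variance_pct = (value - reference) / reference * 100
        print(f"{metric}: group {value} vs. benchmark {reference} ({variance_pct:+.1f}%)")

A printout like this does not answer the staffing question by itself, but it gives the group leader a starting point for the internal conversation about why the numbers differ, as Dr. Kealey describes below.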

“There is no one specific measure,” says Burke Kealey, MD, SFHM, medical director of hospital specialties at HealthPartners Medical Group in St. Paul, Minn., and an SHM board member. “You have to look at it from several different aspects, and all or most need to line up and say that, yes, you could use more help.”

Practice Analysis

Dr. Kealey, board liaison to SHM’s Practice Analysis Committee, says that benchmarking might be among the most important first steps in determining the right time to grow a practice. Group leaders should keep in mind, though, that comparative analysis to outside measures is only step one of gauging a group’s performance.

“The external benchmarking is easy,” he says. “You can look at SHM survey data. There are a lot of places that will do local market surveys; that’s easy stuff to look at. It’s the internal stuff that’s a bit harder to make the case for, ‘OK, yes, I am a little below the national benchmarks, but here’s why.’”

In those instances, group leaders need to “look at the value equation” and engage hospital administrators in a discussion on why such metrics as wRVUs and ADC might not match local, regional, or national standards. Perhaps a hospital has a lower payor mix than the sample pool, or comparable regional institutions have a better mix of medical and surgical comanagement populations. Regardless of the details of the tailored explanation, the conversation must be one that’s ongoing between a group leader and the C-suite or it is likely to fail, Dr. Kealey says.

“It really gets to the partnership between the hospital and the hospitalist group and working together throughout the whole year, and not just looking at staffing needs, but looking at the hospital’s quality,” he adds. “It’s looking at [the hospital’s] ability to retain the surgeons and the specialists. It’s the leadership that you’re providing. It’s showing that you’re a real partner, so that when it does come time to make that value argument, that we need to grow...there is buy-in.

“If you’re not a true partner and you just come in as an adversary, I think your odds of success are not very high.”

Steve Sloan, MD, a partner at AIM Hospitalist Group of Westmont, Ill., says that group leaders would be wise to obtain input from all of their physicians before adding a new doctor, as each new hire impacts compensation for existing staff members. In Dr. Sloan’s 16-member group, 11 physicians are partners who discuss growth plans. The other doctors are on partnership tracks. And while that makes discussions more difficult than when nine physicians formed the group in 2007, up-front dialogue is crucial, Dr. Sloan says.

“We try to get all the partners together to make major decisions, such as hiring,” he says. “We don’t need everyone involved in every decision, but it’s not just one or two people making the decision.”

The conversation about growth also differs depending on whether new hires are needed to move the group into a new business line or to handle its current patient load. Both require a business case for expansion, but either way, codifying expectations with hospital clients is another way to streamline the growth process, says Dr. Alhadeff. His group contracts with his hospital to provide services and can add or remove staff autonomously as needed. Although personnel moves don’t require prior approval from the hospital, there is “an expected fiscal responsibility on our end and a predetermined agreement to do so.”

The group also keeps administrative stakeholders updated to make sure everyone is on the same page. Other groups might delineate in a contract what thresholds need to be met for expansion to be viable.

“It needs to be agreed upon,” Dr. Alhadeff says. “I like the flexibility of being able to determine within our company what we’re doing. But in answer to that, there are unintentional consequences. If we determine that we’re going to bring on someone else, and then we see after a few months that there is not enough volume to support this new physician, we could run into a problem. We will then have to make a financial decision, and the worst thing is to have to fire someone.”

Dr. Alhadeff also worries about the flipside: failing to hire when staff is overworked.

“We run that risk also,” he says. “We are walking a tightrope all the time, and we need to balance that tightrope.”

When you’re putting out fires every day, you don’t have the luxury and the time to look out there and see what’s happening and know everything that’s going on. [Group leaders] need to understand the importance of [long-term analysis] and how all the pieces tie in together.

—Kenneth Hertz, FACMPE, principal, Medical Group Management Association Health Care Consulting Group, Denver

The Long View

Another tightrope is timing. Kenneth Hertz, FACMPE, principal of the Medical Group Management Association’s Health Care Consulting Group, says that it can take six months or longer to hire a physician, which means group leaders need to have a continual focus on whether growth is needed or will soon be needed. He suggests forecasting at least 12 to 18 months in advance to stay ahead of staffing needs.

Unfortunately, he says, analysis often gets put on hold in the shuffle of dealing with daily duties. “This is kind of generic to practice administrators, who are putting out fires almost every day. And when you’re putting out fires every day, you don’t have the luxury and the time to look out there and see what’s happening and know everything that’s going on,” he says. “They need to understand the importance of it and how all the pieces tie in together.”

Brian Hazen, MD, medical director of Inova Fairfax Hospital Group in Fairfax, Va., says an important first step is to recognize that growth isn’t always a good thing. HM group leaders often want to grow before they have stabilized their existing business lines, he says, and that can be the worst tack to take. He also notes that group leaders should weave their program into the fabric of the hospital rather than relying on data alone to make the argument for the group’s value. That means putting hospitalists on committees, spearheading safety programs, and being seen as a partner in the institution.

“Job One is always patient safety and physician sanity,” he says. “If you are careful about growth and buy-in, and you do the committee work and support everybody so that you’re firmly entrenched in the hospital as a value, it’s much safer to grow. Growing for the sake of growing, you risk overexpansion, and that’s dangerous.”

Many hospitalist groups looking to grow will use locum tenens to bridge the staffing gap while they hire new employees (see “No Strings Attached,” December 2012, p. 36), but Dr. Hazen says without a longer view, that only serves as a Band-Aid.

Hertz, the consultant, often uses an analogy to show how important it is to be constantly planning ahead of the growth curve.

“It is a little bit like building roads,” he says. “Once you decide you need to add two lanes, by the time those are finished, you realize we really need to add two more lanes.”


Richard Quinn is a freelance writer in New Jersey.

Issue
The Hospitalist - 2013(02)
Publications
Topics
Sections

Job One is always patient safety and physician sanity. If you are careful about growth and buy-in, and you do the committee work and support everybody so that you’re firmly entrenched in the hospital as a value, it’s much safer to grow. Growing for the sake of growing, you risk overexpansion, and that’s dangerous.

—Brian Hazen, MD, medical director, Inova Fairfax Hospital Group, Fairfax, Va.

Ilan Alhadeff, MD, SFHM, program medical director for Cogent HMG at Hackensack University Medical Center in Hackensack, N.J., pays a lot of attention to the work relative-value units (wRVUs) his hospitalists are producing and the number of encounters they’re tallying. But he’s not particularly worried about what he sees on a daily, weekly, or even monthly basis; he takes a monthslong view of his data when he wants to forecast whether he is going to need to think about adding staff.

“When you look at months, you can start seeing trends,” Dr. Alhadeff says. “Let’s say there’s 16 to 18 average encounters. If your average is 16, you’re saying, ‘OK, you’re on the lower end of your normal.’ And if your average is 18, you’re on the higher end of normal. But if you start seeing 18 every month, odds are you’re going to start getting to 19. So at that point, that’s raising the thought that we need to start thinking about bringing someone else on.”

Dr. Alhadeff

It’s a dance HM group leaders around the country have to do when confronted with the age-old question: Should we expand our service? The answer is more art than science, experts say, as there is no standardized formula for knowing when your HM group should request more support from administration to add an FTE—or two or three. And, in a nod to the HM adage that if you’ve seen one HM group (HMG), then you’ve seen one HMG, the roadmap to expansion varies from place to place. But in a series of interviews with The Hospitalist, physicians, consultants, and management experts suggest there are broad themes that guide the process, including:

  • Data. Dashboard metrics, such as average daily census (ADC), wRVUs, patient encounters, and length of stay (LOS), must be quantified. No discussion on expansion can be intelligibly made without a firm understanding of where a practice currently stands.
  • Benchmarking. Collating figures isn’t enough. Measure your group against other local HMGs, regional groups, and national standards. SHM’s 2012 State of Hospital Medicine report is a good place to start.
  • Scope or schedule. Pushing into new business lines (e.g. orthopedic comanagement) often requires new staff, as does adding shifts to provide 24-hour on-site coverage. Those arguments are different from the case to be made for expanding based on increased patient encounters.
  • Physician buy-in. Group leaders cannot unilaterally determine it’s time to add staff, particularly in small-group settings in which hiring a new physician means taking revenue away from the existing group, if only in the short term. Talk with group members before embarking on expansion. Keep track of physician turnover. If hospitalists are leaving often, it could be a sign the group is understaffed.
  • Administrative buy-in. If a group leader’s request for a new hire comes without months of conversation ahead of it, it’s likely too late. Prepare C-suite executives in advance about potential growth needs so the discussion does not feel like a surprise.
  • Know your market. Don’t wait until a new active-adult community floods the hospital with patients to begin analyzing the impact new residents might have. The same goes for companies that are bringing thousands of new workers to an area.
  • Prepare to do nothing. Too often, group leaders think the easiest solution is hiring a physician to lessen workload. Instead, exhaust improved efficiency options and infrastructure improvements that could accomplish the same goal.
 

 

“There is no one specific measure,” says Burke Kealey, MD, SFHM, medical director of hospital specialties at HealthPartners Medical Group in St. Paul, Minn., and an SHM board member. “You have to look at it from several different aspects, and all or most need to line up and say that, yes, you could use more help.”

Practice Analysis

Dr. Kealey, board liaison to SHM’s Practice Analysis Committee, says that benchmarking might be among the most important first steps in determining the right time to grow a practice. Group leaders should keep in mind, though, that comparative analysis to outside measures is only step one of gauging a group’s performance.

“The external benchmarking is easy,” he says. “You can look at SHM survey data. There are a lot of places that will do local market surveys; that’s easy stuff to look at. It’s the internal stuff that’s a bit harder to make the case for, ‘OK, yes, I am a little below the national benchmarks, but here’s why.’”

Dr. Kealey

In those instances, group leaders need to “look at the value equation” and engage hospital administrators in a discussion on why such metrics as wRVUs and ADC might not match local, regional, or national standards. Perhaps a hospital has a lower payor mix than the sample pool, or comparable regional institutions have a better mix of medical and surgical comanagement populations. Regardless of the details of the tailored explanation, the conversation must be one that’s ongoing between a group leader and the C-suite or it is likely to fail, Dr. Kealey says.

“It really gets to the partnership between the hospital and the hospitalist group and working together throughout the whole year, and not just looking at staffing needs, but looking at the hospital’s quality,” he adds. “It’s looking at [the hospital’s] ability to retain the surgeons and the specialists. It’s the leadership that you’re providing. It’s showing that you’re a real partner, so that when it does come time to make that value argument, that we need to grow...there is buy-in.

“If you’re not a true partner and you just come in as an adversary, I think your odds of success are not very high.”

Dr. Sloan

Steve Sloan, MD, a partner at AIM Hospitalist Group of Westmont, Ill., says that group leaders would be wise to obtain input from all of their physicians before adding a new doctor, as each new hire impacts compensation for existing staff members. In Dr. Sloan’s 16-member group, 11 physicians are partners who discuss growth plans. The other doctors are on partnership tracks. And while that makes discussions more difficult than when nine physicians formed the group in 2007, up-front dialogue is crucial, Dr. Sloan says.

“We try to get all the partners together to make major decisions, such as hiring,” he says. “We don’t need everyone involved in every decision, but it’s not just one or two people making the decision.”

The conversation about growth also differs if new hires are needed to move the group into a new business line or if the group is adding staff to deal with its current patient load. Both require a business case for expansion to be made, but either way, codifying expectations with hospital clients is another way to streamline the growth process, says Dr. Alhadeff. His group contracts with his hospital to provide services and has the ability to autonomously add or delete staff as needed. Although personnel moves don’t require prior approval from the hospital, there is “an expected fiscal responsibility on our end and predetermined agreement do so.”

 

 

The group also keeps administrative stakeholders updated to make sure everyone is on the same page. Other groups might delineate in a contract what thresholds need to be met for expansion to be viable.

“It needs to be agreed upon,” Dr. Alhadeff says. “I like the flexibility of being able to determine within our company what we’re doing. But in answer to that, there are unintentional consequences. If we determine that we’re going to bring on someone else, and then we see after a few months that there is not enough volume to support this new physician, we could run into a problem. We will then have to make a financial decision, and the worst thing is to have to fire someone.”

Dr. Alhadeff also worries about the flipside: failing to hire when staff is overworked.

“We run that risk also,” he says. “We are walking a tightrope all the time, and we need to balance that tightrope.”

When you’re putting out fires every day, you don’t have the luxury and the time to look out there and see what’s happening and know everything that’s going on. [Group leaders] need to understand the importance of [long-term analysis] and how all the pieces tie in together.

—Kenneth Hertz, FACMPE, principal, Medical Group Management Association Health Care Consulting Group, Denver

The Long View

Another tightrope is timing. Kenneth Hertz, FACMPE, principal of the Medical Group Management Association’s Health Care Consulting Group, says that it can take six months or longer to hire a physician, which means group leaders need to have a continual focus on whether growth is needed or will soon be needed. He suggests forecasting at least 12 to 18 months in advance to stay ahead of staffing needs.

Unfortunately, he says, analysis often gets put on hold in the shuffle of dealing with daily duties. “This is kind of generic to practice administrators, who are putting out fires almost every day. And when you’re putting out fires every day, you don’t have the luxury and the time to look out there and see what’s happening and know everything that’s going on,” he says. “They need to understand the importance of it and how all the pieces tie in together.”

Brian Hazen, MD, medical director of Inova Fairfax Hospital Group in Fairfax, Va., says an important approach is to realize growth isn’t always a good thing. HM group leaders often want to grow before they have stabilized their existing business lines, he says, and that can be the worst tack to take. He also notes that a group leader should ingratiate their program into the fabric of their hospital and not just rely on data to make the argument of the group’s value. That means putting hospitalists on committees, spearheading safety programs, and being seen as a partner in the institution.

“Job One is always patient safety and physician sanity,” he says. “If you are careful about growth and buy-in, and you do the committee work and support everybody so that you’re firmly entrenched in the hospital as a value, it’s much safer to grow. Growing for the sake of growing, you risk overexpansion, and that’s dangerous.”

Many hospitalist groups looking to grow will use locum tenens to bridge the staffing gap while they hire new employees (see “No Strings Attached,” December 2012, p. 36), but Dr. Hazen says without a longer view, that only serves as a Band-Aid.

Hertz, the consultant, often uses an analogy to show how important it is to be constantly planning ahead of the growth curve.

 

 

“It is a little bit like building roads,” he says. “Once you decide you need to add two lanes, by the time those are finished, you realize we really need to add two more lanes.”


Richard Quinn is a freelance writer in New Jersey.



John Nelson: Why Spinal Epidural Abscess Poses A Particular Liability Risk for Hospitalists

Article Type
Changed
Fri, 09/14/2018 - 12:20
Display Headline
John Nelson: Why Spinal Epidural Abscess Poses A Particular Liability Risk for Hospitalists

Delayed diagnosis of, or treatment for, a spinal epidural abscess (SEA): that will be the case over which you are sued.

Over the last 15 years, I’ve served as an expert witness for six or seven malpractice cases. Most were related to spinal cord injuries, and in all but one of those, the etiology was epidural abscess. I’ve been asked to review about 40 or 50 additional cases, and while I’ve turned them down (I just don’t have time to do reviews), I nearly always ask about the clinical picture in every case. A significant number have been SEA-related. This experience has convinced me that SEA poses a particular liability risk for hospitalists.

Of course, it is patients who bear the real risk and unfortunate consequences of SEA. Being a defendant physician in a lawsuit is stressful, but it’s nothing compared to the distress of permanent loss of neurologic function. To prevent permanent sequelae, we need to maintain a very high index of suspicion to try to make a prompt diagnosis, and ensure immediate intervention once the diagnosis is made.


Data from Malpractice Insurers

I had the pleasure of getting to know a number of leaders at The Doctor’s Company, a large malpractice insurer that provides malpractice policies for all specialties, including a lot of hospitalists. From 2007 to 2011, they closed 28 SEA-related claims, for which they spent an average of $212,000 defending each one. Eleven of the 28 resulted in indemnity payments averaging $754,000 each (median was $455,000). These dollar amounts are roughly double what might be seen for all other claims and reflect only the payments made on behalf of the company’s insured doctors. The total award to each patient was likely much higher, because in most cases, several defendants (other doctors and a hospital) probably paid money to the patient.
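To see roughly what those averages imply in aggregate, here is a minimal back-of-the-envelope sketch in Python that simply multiplies out the figures quoted above; the totals are illustrative approximations, not numbers reported by the insurer.

    # Rough aggregate arithmetic using only the figures quoted above
    # (28 closed SEA claims, $212,000 average defense cost, 11 paid
    # claims averaging $754,000 in indemnity). Illustrative only.
    sea_claims = 28
    avg_defense_cost = 212_000
    paid_claims = 11
    avg_indemnity = 754_000

    total_defense = sea_claims * avg_defense_cost        # about $5.9 million
    total_indemnity = paid_claims * avg_indemnity        # about $8.3 million
    avg_cost_per_closed_claim = (total_defense + total_indemnity) / sea_claims

    print(f"Approximate defense spend:  ${total_defense:,}")
    print(f"Approximate indemnity paid: ${total_indemnity:,}")
    print(f"Average total cost per closed SEA claim: ${avg_cost_per_closed_claim:,.0f}")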

The Physician Insurers Association of America (PIAA) “is the insurance industry trade association representing domestic and international medical professional liability insurance companies.” Their member malpractice insurance companies have the opportunity to report claims data that PIAA aggregates and makes available. Data from 2002 to 2011 showed 312 closed claims related to any diagnosis (not just SEA) for hospitalists, with an average indemnity payment of $272,553 (the highest hospitalist-related payment was $1.4 million). The most common allegations related to paid claims were 1) “errors in diagnosis,” 2) “failure/delay in referral or consultation,” and 3) “failure to supervise/monitor case.” Although only three of the 312 claims were related to “diseases of the spinal cord,” that was exceeded in frequency only by “diabetes.”

I think these numbers from the malpractice insurance industry support my concern that SEA is a high-risk area, but they don’t really support my anecdotal experience that SEA is clearly hospitalists’ highest-risk area. Maybe SEA is only one of several high-risk areas. Nevertheless, I’m going to stick to my sensationalist guns to get your attention.

Why Is Epidural Abscess a High Risk?

There likely are several reasons SEA is a treacherous liability problem. It can lead to devastating permanent disabling neurologic deficits in people who were previously healthy, and if the medical care was substandard, then significant financial compensation seems appropriate.

Delays in diagnosis of SEA are common. It can be a sneaky illness that, in its early stages, is easy to confuse with less-serious causes of back pain or fever. Even though I think about this particular diagnosis all the time, just last year I had a patient who reported an increase in his usual back pain. I felt reassured that he had no neurologic deficit or fever, and took the time to explain why there was no reason to repeat the spine MRI that had been done about two weeks prior to admission. But he was insistent that he have another MRI, and after a day or two I finally agreed to order it, assuring him it would not explain the cause of his pain. But it did. He had a significant SEA and went to emergency surgery. I was stunned, and profoundly relieved that he had no neurologic sequelae.

One of the remarkable things I’ve seen in the cases I’ve reviewed is that even when there is clear cause for concern, too often no action is taken. In a number of cases, the nurses’ notes document increasing back pain, loss of the ability to stand, urinary retention, and other alarming signs. Yet the doctors either never learn of these issues, or they choose to attribute them to other causes.

Even when the diagnosis of SEA is clearly established, it is all too common for doctors caring for the patient not to act on this information. In several cases I reviewed, a radiologist had documented reporting the diagnosis to the hospitalist (and in one case the neurosurgeon as well), yet nothing was done for 12 hours or more. It is hard to imagine that establishing this diagnosis doesn’t reliably lead to an emergent response, but it doesn’t. (In some cases, nonsurgical management may be an option, but in these malpractice cases, there was simply a failure to act on the diagnosis with any sort of plan.)

Practice Management Perspective

I usually discuss hospitalist practice operations in this space—things like work schedules and compensation. But attending to risk management is one component of effective practice operations, so I thought I’d raise the topic here. Obviously, there is a lot more to hospitalist risk management than one diagnosis, but a column on the whole universe of risk management would probably serve no purpose other than as a sleep aid. I hope that by focusing solely on SEA, there is some chance that you’ll remember it, and you’ll make sure that you disprove my first two sentences.

Lowering your risk of a malpractice lawsuit is valuable and worth spending time on. But far more important is that by keeping the diagnosis in mind, and ensuring that you act emergently when there is cause for concern, you might save someone from the devastating consequences of this disease.


Dr. Nelson has been a practicing hospitalist since 1988. He is co-founder and past president of SHM, and principal in Nelson Flores Hospital Medicine Consultants. He is co-director for SHM’s “Best Practices in Managing a Hospital Medicine Program” course. Write to him at [email protected].


New Anticoagulants Offer Promise, but Obstacles Remain

Article Type
Changed
Fri, 09/14/2018 - 12:20
Display Headline
New Anticoagulants Offer Promise, but Obstacles Remain

Dr. Hospitalist

I see more and more people taking one of the newer anticoagulants. I’ve also seen a few disasters with these drugs. What’s the story?

Stacy M. Harper, Green Bay, Wis.

Dr. Hospitalist responds:

Although warfarin (Coumadin) has been a mainstay anticoagulant for decades, it can be a frustrating medicine to manage because of its myriad drug interactions and the constant need for therapeutic monitoring. Recently, we have seen new medications hit the market (with one more likely to be approved soon), each with its pros and cons. Here’s an overview; the regimens quoted below are also gathered into a brief summary sketch after the list:

  • Dabigatran (Pradaxa): A direct thrombin inhibitor, taken twice daily. It is approved for stroke prevention in atrial fibrillation (afib) at 150 mg BID (RE-LY trial). It has also been studied extensively for VTE prevention after orthopedic surgery, but it has not yet been approved in the U.S. for this indication.

Ask Dr. Hospitalist

Do you have a problem or concern that you’d like Dr. Hospitalist to address? Email your questions to [email protected].

As with all of these drugs, there is no reversal agent and there are no levels to measure. A recent report noted an increased risk of bleeding in patients who are older, have a low BMI, or have renal dysfunction. The manufacturer recommends a dose of 75 mg BID for patients with renal dysfunction, defined as a GFR of 15 to 30 mL/min; however, that dosing regimen was never explicitly studied.

Overall, it’s become quite a popular drug with the cardiologists in my neck of the woods. GERD can be a bothersome side effect. I avoid using it in patients older than 80, or in a patient with any renal dysfunction. Also, remember that it is not approved for VTE prevention or treatment.

  • Rivaroxaban (Xarelto): An oral factor Xa inhibitor. Usually taken once daily at 10 mg for VTE prevention (RECORD trials). It is dosed at 20 mg/day for stroke prevention in afib (ROCKET-AF trial). Just recently, it was approved by the FDA for the acute treatment of DVT and PE (EINSTEIN trial), dosed at 15 mg BID for the first 21 days and then continued at 20 mg daily after the initial period (see “Game-Changer,” p. 41). It is more hepatically metabolized than dabigatran, but it still has a significant renal clearance component. When compared with Lovenox in orthopedic patients, it is as effective but carries a slightly higher risk of bleeding. I would avoid using it in any patient with significant renal or hepatic dysfunction.
  • Apixaban (Eliquis): Another oral factor Xa inhibitor. Studied at 2.5 mg BID for VTE prevention in orthopedic patients (ADVANCE trials) and at 5 mg BID for stroke prevention in afib (ARISTOTLE trial). It is not yet approved in the U.S. for any indication, but a final decision is expected from the FDA by March. Overall, the data are fairly compelling, and it looks like a strong candidate: the studies suggest a drug that is potentially more effective than Lovenox, with less risk of bleeding, in orthopedic patients. It is mainly hepatically metabolized.
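
For quick reference, the regimens quoted in the list above can be collected into a small lookup table. The following is a minimal sketch that only restates the doses and trials named in this column (as of early 2013); the structure and function name are illustrative, and nothing here should be read as a dosing reference.

    # Regimens as quoted in the column above (early 2013); illustrative only,
    # not a clinical dosing reference.
    REGIMENS = {
        ("dabigatran", "afib stroke prevention"):
            "150 mg BID (RE-LY); manufacturer suggests 75 mg BID for GFR 15-30 mL/min",
        ("rivaroxaban", "VTE prevention"): "10 mg daily (RECORD)",
        ("rivaroxaban", "afib stroke prevention"): "20 mg daily (ROCKET-AF)",
        ("rivaroxaban", "acute DVT/PE"): "15 mg BID x 21 days, then 20 mg daily (EINSTEIN)",
        ("apixaban", "VTE prevention"): "2.5 mg BID (ADVANCE); not yet FDA-approved",
        ("apixaban", "afib stroke prevention"): "5 mg BID (ARISTOTLE); not yet FDA-approved",
    }

    def quoted_regimen(drug: str, indication: str) -> str:
        """Return the regimen quoted in the column, if any."""
        return REGIMENS.get((drug.lower(), indication), "no regimen quoted in the column")

    print(quoted_regimen("rivaroxaban", "acute DVT/PE"))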

So, with no drug company relationships to disclose, here are my general observations: For starters, I think dabigatran is being overused in older patients with renal dysfunction. I seem to stop it more often than I recommend it, and it is far from my favorite drug. Rivaroxaban looks appropriate for VTE prevention, and now having the option to transition patients who develop a clot onto a treatment dose of the drug is appealing. Apixaban’s data look the best of the three agents in terms of both efficacy and bleeding, and although it has yet to be approved here, I imagine that will change in the near future. For all of these drugs, remember that we have no long-term safety data and no reversal agents. It will be interesting to see how this plays out and which of these drugs have staying power. For all of warfarin’s faults, at least we know how to measure it and how to stop it.


Clinical Shorts

Article Type
Changed
Fri, 09/14/2018 - 12:20
Display Headline
Clinical Shorts

INFLUENZA VACCINE EFFECTIVENESS VARIES BY AGE

Case-control study of the 2010-2011 influenza vaccine found overall vaccine effectiveness to be 60%, ranging from 69% in those ages 3 to 8 to just 36% in those 65 or older.

Citation: Treanor JJ, Talbot HK, Ohmit SE, et al. Effectiveness of seasonal influenza vaccines in the United States during a season with circulation of all three vaccine strains. Clin Infect Dis. 2012;55:951-959.

NSAIDS INCREASE CV RISK AFTER MI, REGARDLESS OF LENGTH OF TIME

A nationwide cohort study in Denmark shows increased coronary risk with NSAID use for at least five years after first-time myocardial infarction.

Citation: Olsen AM, Fosbøl EL, Lindhardsen J, et al. Long-term cardiovascular risk of nonsteroidal anti-inflammatory drug use according to time passed after first-time myocardial infarction: a nationwide cohort study. Circulation. 2012;126:1955-1963.

PATIENTS WITH METASTATIC CANCER OFTEN OVERESTIMATE CHEMOTHERAPEUTIC EFFICACY

Survey of patients with metastatic solid tumors reveals significant misunderstanding regarding the curative potential of chemotherapy, and an inverse relationship between level of understanding and patients’ satisfaction with physician communication.

Citation: Weeks JC, Catalano PJ, Cronin A, et al. Patients’ expectations about effects of chemotherapy for advanced cancer. N Engl J Med. 2012;367:1616-1625.

RISKS ASSOCIATED WITH SYNTHETIC CANNABINOID ABUSE

This case series from the National Poison Data System indicates that adverse effects of synthetic cannabinoids are generally mild and self-limited, though rare reports of life-threatening seizures were identified.

Citation: Hoyte CO, Jacob J, Monte AA, Al-Jumaan M, Bronstein AC, Heard KJ. A characterization of synthetic cannabinoid exposures reported to the National Poison Data System in 2010. Ann Emerg Med. 2012;60:435-438.

FDA APPROVES FIRST SUBCUTANEOUS HEART DEFIBRILLATOR

Based on a multicenter study of 321 patients, the FDA approved the first subcutaneous heart defibrillator, which might be useful for patients in whom intravascular lead placement is problematic.

Citation: Bolek M. FDA approves first subcutaneous heart defibrillator. Food and Drug Administration website. Available at: http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm321755.htm. Accessed Jan. 2, 2013.

ADD PCV13 TO THE LIST OF ADULT IMMUNIZATIONS

Thirteen-valent pneumococcal conjugate vaccine (PCV13) is now recommended in addition to 23-valent pneumococcal polysaccharide vaccine in adults 19 and older with immunocompromising conditions to decrease the risk of invasive pneumococcal disease.

Citation: Bennett NM, Whitney CG, Moore M, et al. Use of 13-valent pneumococcal conjugate vaccine and 23-valent pneumococcal polysaccharide vaccine for adults with immunocompromising conditions: recommendations of the advisory committee on immunization practices (ACIP). MMWR Morb Mortal Wkly Rep. 2012;61:816-819.
