Why Residents Order Unnecessary Inpatient Laboratory Tests

Resident physicians routinely order inpatient laboratory tests,[1] and there is evidence to suggest that many of these tests are unnecessary[2] and potentially harmful.[3] The Society of Hospital Medicine has identified reducing the unnecessary ordering of inpatient laboratory testing as part of the Choosing Wisely campaign.[4] Hospitalists at academic medical centers face growing pressure to develop processes that reduce low‐value care and to train residents to be stewards of healthcare resources.[5] Studies[6, 7, 8, 9] have described institutional and training factors that drive residents' resource utilization patterns, but, to our knowledge, none have described what factors contribute to residents' unnecessary laboratory testing. To better understand the factors associated with residents' ordering patterns, we conducted a qualitative analysis of internal medicine (IM) and general surgery (GS) residents at a large academic medical center to describe residents' perceptions of (1) the frequency of ordering unnecessary inpatient laboratory tests, (2) the factors contributing to that behavior, and (3) potential interventions to change it. We also explored differences in responses by specialty and training level.

METHODS

In October 2014, we surveyed all IM and GS residents at the Hospital of the University of Pennsylvania. We reviewed the literature and conducted focus groups with residents to formulate items for the survey instrument. A draft of the survey was administered to 8 residents from both specialties, and their feedback was collated and incorporated into the final version of the instrument. The final 15‐question survey comprised 4 components: (1) training information such as specialty and postgraduate year (PGY), (2) self‐reported frequency of perceived unnecessary ordering of inpatient laboratory tests, (3) perceptions of factors contributing to unnecessary ordering, and (4) potential interventions to reduce unnecessary ordering. An unnecessary test was defined as a test that would not change management regardless of its result. To increase response rates, participants were entered into drawings for $5 gift cards, a $200 air travel voucher, and an iPad mini.

Descriptive statistics and χ2 tests were conducted with Stata version 13 (StataCorp LP, College Station, TX) to explore differences in the frequency of responses by specialty and training level. To identify themes that emerged from free‐text responses, two independent reviewers (M.S.S. and E.J.K.) performed qualitative content analysis using grounded theory.[10] Reviewers read 10% of responses to create a coding guide. Another 10% of the responses were randomly selected to assess inter‐rater reliability by calculating κ scores. The reviewers independently coded the remaining 80% of responses. Discrepancies were adjudicated by consensus between the reviewers. The University of Pennsylvania Institutional Review Board deemed this study exempt from review.
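
For illustration, the sketch below shows roughly equivalent computations in Python rather than Stata: a χ2 test on a 2 × 2 table and a κ (Cohen's kappa) calculation for the double‐coded responses. The scipy/scikit‐learn calls are standard, but the reviewer codings are invented toy data and the counts are only approximations of those reported in the Results (about 75 of 85 IM and 21 of 31 GS respondents reporting the behavior); this is a minimal sketch, not the authors' analysis code.

```python
# Minimal sketch, not the authors' code: the study itself used Stata 13.
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Residents reporting unnecessary ordering vs. not, by specialty
# (approximately 75 of 85 IM and 21 of 31 GS respondents).
observed = [[75, 10],
            [21, 10]]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Inter-rater reliability on the double-coded 10% of free-text responses:
# each list holds one reviewer's category per response (invented toy data).
reviewer_a = ["cost", "role_model", "cds", "cost", "curriculum", "cost"]
reviewer_b = ["cost", "role_model", "cds", "role_model", "curriculum", "cost"]
print(f"kappa = {cohen_kappa_score(reviewer_a, reviewer_b):.2f}")
```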

RESULTS

The sample comprised 57.0% (85/149) of IM and 54.4% (31/57) of GS residents (Table 1). Among respondents, perceived unnecessary inpatient laboratory test ordering was self‐reported by 88.2% of IM and 67.7% of GS residents. This behavior was reported to occur on a daily basis by 43.5% and 32.3% of responding IM and GS residents, respectively. Across both specialties, the most commonly reported factors contributing to these behaviors were learned practice habit/routine (90.5%), a lack of understanding of the costs associated with lab tests (86.2%), diagnostic uncertainty (82.8%), and fear of not having the lab result information when requested by an attending (75.9%). There were no significant differences in any of these contributing factors by specialty or PGY level. Among respondents who completed a free‐text response (IM: 76 of 85; GS: 21 of 31), the most commonly proposed interventions to address these issues were increasing cost transparency (IM 40.8%; GS 33.3%), improvements to faculty role modeling (IM 30.2%; GS 33.3%), and computerized reminders or decision support (IM 21.1%; GS 28.6%) (Table 2).

Residents' Self‐Reported Frequency of and Factors Contributing to Perceived Unnecessary Inpatient Laboratory Ordering
Residents (n = 116)*
  • NOTE: Abbreviations: EHR, electronic health record. *There were 116 responses from 206 eligible residents, among whom 57.0% (85/149) were IM and 54.4% (31/57) were GS residents. Among the IM respondents, 36 were PGY‐1 interns, and among the GS respondents, 12 were PGY‐1 interns. There were no differences in response across specialty or PGY level. Respondents were asked, "Please rate your level of agreement with whether the following items contribute to unnecessary ordering," on a 5‐point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree). Agreement included survey participants who agreed or strongly agreed with the statement.

Reported he or she orders unnecessary routine labs, no. (%) 96 (82.8)
Frequency of ordering unnecessary labs, no. (%)
Daily 47 (49.0)
2–3 times/week 44 (45.8)
1 time/week or less 5 (5.2)
Agreement with statement as factors contributing to ordering unnecessary labs, no. (%)
Practice habit; I am trained to order repeating daily labs 105 (90.5)
Lack of cost transparency of labs 100 (86.2)
Discomfort with diagnostic uncertainty 96 (82.8)
Concern that the attending will ask for the data and I will not have it 88 (75.9)
Lack of role modeling of cost‐conscious care 78 (67.2)
Lack of cost‐conscious culture at our institution 76 (65.5)
Lack of experience 72 (62.1)
Ease of ordering repeating labs in EHR 60 (51.7)
Fear of litigation from missed diagnosis related to lab data 44 (37.9)
Residents' Suggestions for Possible Solutions to Unnecessary Ordering
Categories* Representative Quotes IM, n = 76, No. (%) GS, n = 21, No. (%)
  • NOTE: Abbreviations: coags, coagulation tests; EHR, electronic health record; IM, internal medicine; GS, general surgery; LFT, liver function tests. *Kappa scores: mean 0.78; range, 0.59–1. Responses could be assigned to multiple categories. There were 85 of 149 (57.0%) IM respondents, among whom 76 of 85 (89.4%) provided a free‐text suggestion. There were 31 of 57 (54.4%) GS respondents, among whom 21 of 31 (67.7%) provided a free‐text suggestion.

Cost transparency Let us know the costs of what we order and train us to remember that a patient gets a bill and we are contributing to a possible bankruptcy or hardship. 31 (40.8) 7 (33.3)
Display the cost of labs when [we're] ordering them [in the EHR].
Post the prices so that MDs understand how much everything costs.
Role modeling restraint Train attendings to be more critical about necessity of labs and overordering. Make it part of rounding practice to decide on the labs truly needed for each patient the next day. 23 (30.2) 7 (33.3)
Attendings could review daily lab orders and briefly explain which they do not believe we need. This would allow residents to learn from their experience and their thought processes.
Encouragement and modeling of this practice from the faculty perhaps by laying out more clear expectations for which clinical situations warrant daily labs and which do not.
Computerized reminders or decision support When someone orders labs and the previous day's lab was normal or labs were stable for 2 days, an alert should pop up to reconsider (see the sketch after this table). 16 (21.1) 6 (28.6)
Prevent us from being able to order repeating [or standing] labs.
Track how many times labs changed management, and restrict certain labs, like LFTs/coags.
High‐value care educational curricula Increase awareness of issue by having a noon conference about it or some other forum for residents to discuss the issue. 12 (15.8) 4 (19.0)
Establish guidelines for housestaff to learn/follow from start of residency.
Integrate cost conscious care into training program curricula.
System improvements Make it easier to get labs later [in the day]. 6 (7.9) 2 (9.5)
Improve timeliness of phlebotomy/laboratory systems.
More responsive phlebotomy.
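
The computerized decision‐support idea quoted in Table 2 (an alert when the previous day's result was normal or results have been stable for 2 days) can be expressed as a simple ordering rule. The sketch below is a hypothetical illustration only; the function name, the 5% stability threshold, and the data shapes are assumptions and are not drawn from the study or from any EHR.

```python
from datetime import date

def should_prompt_reconsideration(lab_history, normal_range, stable_days=2):
    """Return True if an order-entry alert should ask the clinician to reconsider.

    lab_history: list of (date, value) pairs for one test, oldest first.
    normal_range: (low, high) reference interval for that test.
    Fires when the most recent value is normal, or when the last `stable_days`
    values are essentially unchanged (within 5% of each other).
    """
    if not lab_history:
        return False
    low, high = normal_range
    last_value = lab_history[-1][1]
    if low <= last_value <= high:
        return True
    recent = [value for _, value in lab_history[-stable_days:]]
    if len(recent) == stable_days and max(recent) - min(recent) <= 0.05 * max(recent):
        return True
    return False

# Example: a creatinine that is abnormal but unchanged over 2 days triggers the prompt.
history = [(date(2014, 10, 1), 1.9), (date(2014, 10, 2), 1.9)]
print(should_prompt_reconsideration(history, normal_range=(0.6, 1.2)))  # True
```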

DISCUSSION

A significant portion of inpatient laboratory testing is unnecessary,[2] creating an opportunity to reduce utilization and associated costs. Our findings demonstrate that these behaviors occur at high levels among residents (IM 88.2%; GS 67.7%) at a large academic medical center. These findings also reveal that residents attribute this behavior to practice habit, lack of access to cost data, and perceived expectations for daily lab ordering by faculty. Interventions to change these behaviors will need to involve changes to the health system culture, increasing transparency of the costs associated with healthcare services, and shifting to a model of education that celebrates restraint.[11]

Our study adds to the emerging effort to deliver value in healthcare and provides several important insights for hospitalists and medical educators at academic centers. First, our findings reflect the significant role that the clinical learning environment plays in shaping residents' practice behaviors. Residency training is a critical time when physicians begin to form habits that imprint upon their future practice patterns,[5] and our residents are aware that their ordering of laboratory tests they perceive to be unnecessary is driven by habit. Studies[6, 7] have shown that residents may implicitly accept certain styles of practice as correct and are more likely to adopt those styles during the early years of their training. In our institution, for example, ordering standing or daily morning labs using a repeated copy‐forward function in the electronic health record is a common, learned practice (a ritual) passed down from senior to junior residents year after year, and it occurs in both training specialties. There is a need to better understand, measure, and change the culture of the clinical learning environment so that it demonstrates practices and values that model high‐value care for residents. Multipronged interventions that address culture, oversight, and systems change[12] are necessary to facilitate effective physician stewardship of inpatient laboratory testing and to attack a problem so deeply ingrained in habit.

Second, residents in our study believe that access to cost information will better equip them to reduce unnecessary lab ordering. Two recent systematic reviews[13, 14] have shown that having real‐time access to charges changes physician ordering and prescribing behavior. Increasing cost transparency may not only be an important intervention for hospitals to reduce overuse and control cost, but also better arm resident physicians with the information they need to make higher‐value recommendations for their patients and be stewards of healthcare resources.
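
As a concrete illustration of point‐of‐order cost transparency, the sketch below annotates a lab order with its estimated charge. The prices, the order‐entry function, and the weekly projection are hypothetical assumptions for illustration; they are not taken from the study, a hospital chargemaster, or any EHR vendor's API.

```python
# Hypothetical charge display at the point of ordering; prices are invented.
LAB_CHARGES_USD = {
    "CBC": 35.00,
    "Basic metabolic panel": 48.00,
    "Coagulation panel": 62.00,
}

def place_lab_order(test_name, repeat_daily=False):
    """Print an order confirmation annotated with an estimated charge."""
    charge = LAB_CHARGES_USD.get(test_name)
    if charge is None:
        note = "Charge unavailable"
    else:
        note = f"Estimated charge: ${charge:.2f} per draw"
        if repeat_daily:
            note += f" (~${7 * charge:.2f} if repeated daily for a week)"
    print(f"Ordering {test_name}. {note}")

place_lab_order("CBC", repeat_daily=True)
# Ordering CBC. Estimated charge: $35.00 per draw (~$245.00 if repeated daily for a week)
```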

Third, our study highlights that residents' unnecessary laboratory utilization is driven by perceived, unspoken faculty expectations for such ordering. This reflects an important undercurrent in the medical education system, which has historically emphasized and rewarded thoroughness while often penalizing restraint.[11] Hospitalists can play a major role in changing these behaviors by sharing their expectations regarding test ordering at the beginning of teaching rotations, including teaching points that discourage overutilization during rounds, and role modeling high‐value care in their own practice. Taken together and practiced routinely, these behaviors could prevent poor habits from forming in trainees and discourage overinvestigation. Hospitalists must take responsibility for disseminating the practice of restraint to achieve more cost‐effective care, and purposeful faculty development in high‐value care is needed. Additionally, supporting physician leaders who serve as the institutional bridge between graduate medical education and the health system[15] could foster an environment conducive to coaching residents and faculty to reduce unnecessary practice variation.

This study is subject to several limitations. First, the survey was conducted at a single academic medical center with a modest response rate, so our findings may not be generalizable to other settings or training programs. Second, we did not validate residents' perceptions of whether tests were, in fact, unnecessary, nor did we validate residents' self‐reports of their own behavior, which may differ from actual behavior; these are two distinct but inter‐related limitations. Although self‐reported responses are an important indicator of residents' practice, validating these data with objective measures, such as the necessity of ordered tests as judged by an expert physician or group of experienced physicians, or the number of inpatient labs residents actually order, may add further insight. If residents under‐reported this behavior, ordering of perceived unnecessary tests may be even more common than we found. Third, although we defined an unnecessary test within the survey and refined the wording based on feedback from our pilot, respondents' interpretation of the term "unnecessary" may still have varied, and this variation may contribute to our findings.

In conclusion, this is one of the first qualitative evaluations to explore residents' perceptions of why they order unnecessary inpatient laboratory tests. Our findings offer a rich understanding of residents' beliefs about their own role in unnecessary lab ordering and explore possible solutions through the lens of the resident. Yet, it is unclear whether tests deemed unnecessary by residents would also be considered unnecessary by attending physicians or even patients. Future efforts are needed to better define which inpatient tests are unnecessary from multiple perspectives, including those of clinicians and patients.

Acknowledgements

The authors thank Patrick J. Brennan, MD, Jeffery S. Berns, MD, Lisa M. Bellini, MD, Jon B. Morris, MD, and Irving Nachamkin, DrPH, MPH, all from the Hospital of the University of Pennsylvania, for supporting this work. They received no compensation.

Disclosures: This work was presented in part at the AAMC Integrating Quality Meeting, June 11, 2015, Chicago, Illinois; and the Alliance for Academic Internal Medicine Fall Meeting, October 9, 2015, Atlanta, Georgia. The authors report no conflicts of interest.

References
  1. Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: a report from one university's hospitalist service. Acad Med. 2011;86(1):139–145.
  2. Zhi M, Ding EL, Theisen‐Toupal J, Whelan J, Arnaout R. The landscape of inappropriate laboratory testing: a 15‐year meta‐analysis. PLoS One. 2013;8(11):e78962.
  3. Salisbury A, Reid K, Alexander K, et al. Diagnostic blood loss from phlebotomy and hospital‐acquired anemia during acute myocardial infarction. Arch Intern Med. 2011;171(18):1646–1653.
  4. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486–492.
  5. Korenstein D. Charting the route to high‐value care: the role of medical education. JAMA. 2015;314(22):2359–2361.
  6. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385–2393.
  7. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists' ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640–1648.
  8. Ryskina KL, Dine CJ, Kim EJ, Bishop TF, Epstein AJ. Effect of attending practice style on generic medication prescribing by residents in the clinic setting: an observational study. J Gen Intern Med. 2015;30(9):1286–1293.
  9. Patel MS, Reed DA, Smith C, Arora VM. Role‐modeling cost‐conscious care—a national evaluation of perceptions of faculty at teaching hospitals in the United States. J Gen Intern Med. 2015;30(9):1294–1298.
  10. Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago, IL: Aldine Publishing; 1967.
  11. Detsky AC, Verma AA. A new model for medical education: celebrating restraint. JAMA. 2012;308(13):1329–1330.
  12. Moriates C, Shah NT, Arora VM. A framework for the frontline: how hospitalists can improve healthcare value. J Hosp Med. 2016;11(4):297–302.
  13. Goetz C, Rotman SR, Hartoularos G, Bishop TF. The effect of charge display on cost of care and physician practice behaviors: a systematic review. J Gen Intern Med. 2015;30(6):835–842.
  14. Silvestri MT, Bongiovanni TR, Glover JG, Gross CP. Impact of price display on provider ordering: a systematic review. J Hosp Med. 2016;11(1):65–76.
  15. Gupta R, Arora VM. Merging the health system and education silos to better educate future physicians. JAMA. 2015;314(22):2349–2350.

Address for correspondence and reprint requests: Mina S. Sedrak, MD, MS, 1500 E. Duarte Road, Duarte, CA 91010; Telephone: 626‐471‐9200; Fax: 626‐301‐8233; E‐mail: [email protected]

© 2016 Society of Hospital Medicine

Comportment and Communication Score

In 2014, there were more than 40,000 hospitalists in the United States, and approximately 20% were employed by academic medical centers.[1] Hospitalist physician groups are committed to delivering excellent patient care. However, the published literature is limited with respect to defining optimal care in hospital medicine.

Patient satisfaction surveys, such as Press Ganey (PG)[2] and Hospital Consumer Assessment of Healthcare Providers and Systems,[3] are being used to assess patients' contentment with the quality of care they receive while hospitalized. The Society of Hospital Medicine, the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] There are, however, several problems with the current methods. First, the attribution to specific providers is questionable. Second, recall about the provider by the patients may be poor because surveys are sent to patients days after they return home. Third, the patients' recovery and health outcomes are likely to influence their assessment of the doctor. Finally, feedback is known to be most valuable and transformative when it is specific and given in real time. Thus, a tool that is able to provide feedback at the encounter level should be more helpful than a tool that offers assessment at the level of the admission, particularly when it can be also delivered immediately after the data are collected.

Comportment has been used to describe both the way a person behaves and the way she carries herself (ie, her general manner).[5] Excellent comportment and communication can serve as the foundation for delivering patient‐centered care.[6, 7, 8] Patient centeredness has been shown to improve the patient experience and clinical outcomes, including compliance with therapeutic plans.[9, 10, 11] Respectful behavior, etiquette‐based medicine, and effective communication also lay the foundation upon which the therapeutic alliance between a doctor and patient can be built.

The goal of this study was to establish a metric that could comprehensively assess a hospitalist provider's comportment and communication skills during an encounter with a hospitalized patient.

METHODS

Study Design and Setting

An observational study of hospitalist physicians was conducted between June 2013 and December 2013 at 5 hospitals in Maryland and Washington, DC. Two are academic medical centers (Johns Hopkins Hospital and Johns Hopkins Bayview Medical Center [JHBMC]), and the others are community hospitals (Howard County General Hospital [HCGH], Sibley Memorial Hospital [SMC], and Suburban Hospital). These 5 hospitals, located in 2 large cities, have distinct cultures and leadership and serve different patient populations.

Subjects

In developing a tool to measure communication and comportment, we needed to observe physician–patient encounters in which there would be a good deal of variability in performance. During pilot testing, when following a few of the most senior and respected hospitalists, we noted encounters during which they excelled and others in which they performed less optimally. Further, when we followed some less‐experienced providers, their skills were less developed and they uniformly missed most of the behaviors on the tool believed to be associated with optimal communication and comportment. Because of this, we decided to purposively sample the strongest clinicians at each of the 5 hospitals in hopes of seeing a range of scores on the tool.

The chiefs of hospital medicine at the 5 hospitals were contacted and asked to identify their most clinically excellent hospitalists, namely those who they thought were most clinically skilled within their groups. Because our goal was to observe the top tier (approximately 20%) of the hospitalists within each group, we asked each chief to name a specific number of physicians (eg, 3 names for 1 group with 15 hospitalists, and 8 from another group with 40 physicians). No precise definition of most clinically excellent hospitalists was provided to the chiefs. It was believed that they were well positioned to select their best clinicians because of both subjective feedback and objective data that flow to them. This postulate may have been corroborated by the fact that each of them efficiently sent a list of their top choices without any questions being asked.

The 29 hospitalists (named by their chiefs) were in turn emailed and invited to participate in the study. All but 3 hospitalists consented to participate in the study; this resulted in a cohort of 26 who would be observed.

Tool Development

A team was assembled to develop the hospital medicine comportment and communication observation tool (HMCCOT). All team members had extensive clinical experience and had been teaching clinical skills for many years; several had published articles on clinical excellence and had won clinical awards. The team's development of the HMCCOT was extensively informed by a review of the literature. Two articles that most heavily influenced the HMCCOT's development were Christmas et al.'s paper describing 7 core domains of excellence, 2 of which are intimately linked to communication and comportment,[12] and Kahn's article delineating behaviors to be performed upon entering the patient's room, termed etiquette-based medicine.[6] The team also considered prior time–motion studies in hospital medicine,[7, 13] which led to the inclusion of temporal measurements during the observations. The tool was presented at academic conferences in the Division of General Internal Medicine at Johns Hopkins and iteratively revised based on the feedback. Feedback was also sought from members of the American Academy on Communication in Healthcare who have spent their careers studying physician–patient relationships. These methods established content validity evidence for the tool under development. The goal of the HMCCOT was to assess behaviors believed to be associated with optimal comportment and communication in hospital medicine.

The HMCCOT was pilot tested by observing different JHBMC hospitalists' patient encounters, and it was iteratively revised. On multiple occasions, 2 authors/investigators observed JHBMC hospitalists together and compared data capture and levels of agreement across all elements. Then, for formal assessment of inter-rater reliability, 2 authors observed 5 different hospitalists across 25 patient encounters; the κ coefficient was 0.91 (standard error = 0.04). This step helped to establish internal structure validity evidence for the tool.
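
To make the inter-rater reliability step concrete, here is a minimal sketch of how agreement between the 2 observers might be computed, assuming the reported coefficient is a Cohen's kappa over paired binary item codings; the observation vectors below are illustrative and are not study data.

```python
# Hypothetical inter-rater reliability check for two observers coding the same
# encounters/items (1 = behavior observed, 0 = not observed).
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
rater_b = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
```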

The initial version of the HMCCOT contained 36 elements, organized sequentially so that the observer could document behaviors in the order they were likely to occur, which facilitated the process and minimized oversight. A few examples of the elements were whether the encounter began with an open-ended or a close-ended statement, whether the hospitalist introduced himself/herself, and whether the provider smiled at any point during the patient encounter.

Data Collection

One author scheduled a time to observe each hospitalist physician during their routine clinical care of patients when they were not working with medical learners. Hospitalists were naturally aware that they were being observed but were not aware of the specific data elements or behaviors that were being recorded.

The study was approved by the institutional review board at the Johns Hopkins University School of Medicine, and by each of the research review committees at HCGH, SMC, and Suburban hospitals.

Data Analysis

After data collection, all data were deidentified so that the researchers were blinded to the identities of the physicians. Respondent characteristics are presented as proportions and means. Unpaired t tests and χ2 tests were used to compare demographic information, stratified by mean HMCCOT score. The data were analyzed using Stata statistical software version 12.1 (StataCorp LP, College Station, TX).
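
As an illustration of this analytic step, the sketch below compares physician characteristics across HMCCOT score groups with an unpaired t test and a χ2 test (SciPy applies the Yates continuity correction to 2x2 tables by default); the DataFrame, column names, and values are hypothetical rather than study data.

```python
# Hypothetical comparison of physician characteristics by HMCCOT score group.
import pandas as pd
from scipy.stats import ttest_ind, chi2_contingency

df = pd.DataFrame({
    "age": [35, 42, 38, 31, 45, 39],
    "academic": [1, 0, 1, 0, 1, 0],           # 1 = academic hospitalist
    "hmccot_mean": [55, 72, 64, 48, 70, 59],  # mean HMCCOT score per physician
})
df["above_mean"] = df["hmccot_mean"] > df["hmccot_mean"].mean()

# Continuous characteristic: unpaired t test between the two score groups.
t_stat, p_age = ttest_ind(df.loc[df["above_mean"], "age"],
                          df.loc[~df["above_mean"], "age"])

# Categorical characteristic: chi-square test on a 2x2 contingency table
# (Yates correction applied by default for 2x2 tables).
chi2, p_acad, dof, _ = chi2_contingency(pd.crosstab(df["academic"], df["above_mean"]))

print(f"Age: t = {t_stat:.2f}, P = {p_age:.3f}")
print(f"Academic: chi2 = {chi2:.2f}, P = {p_acad:.3f}")
```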

Further Validation of the HMCCOT

Upon reviewing the distribution of data after observing the 26 physicians with their patients, we excluded 13 variables from the initial version of the tool that lacked discriminatory value (eg, 100% or 0% of physicians performed the observed behavior during the encounters); this left 23 variables that were judged to be most clinically relevant in the final version of the HMCCOT. Two examples of the excluded variables were uses technology/literature to educate patients (not witnessed in any encounter) and obeys posted contact precautions (done uniformly by all). The HMCCOT score represents the proportion of observed behaviors (out of the 23 behaviors) and was computed for each hospitalist for every patient encounter. Finally, validity evidence based on relations to other variables was established by comparing the physicians' mean HMCCOT scores with their PG scores from the same time period to evaluate the correlation between the 2 scores. This association was assessed using Pearson correlations.
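
The scoring and correlation steps described here can be summarized in a short sketch: each encounter's HMCCOT score is the proportion of the 23 retained behaviors observed, encounter scores are averaged per hospitalist, and the hospitalist means are correlated with composite PG scores using a Pearson correlation. The identifiers and numbers below are illustrative only.

```python
# Hypothetical HMCCOT scoring and correlation with Press Ganey (PG) scores.
from collections import defaultdict
from scipy.stats import pearsonr

N_BEHAVIORS = 23  # behaviors retained in the final version of the tool

# (hospitalist id, number of the 23 behaviors observed in one encounter)
encounters = [("A", 18), ("A", 15), ("B", 12), ("B", 9), ("C", 20), ("C", 17)]

per_md = defaultdict(list)
for md, n_observed in encounters:
    per_md[md].append(100 * n_observed / N_BEHAVIORS)  # encounter-level score (0-100)

mean_hmccot = {md: sum(scores) / len(scores) for md, scores in per_md.items()}

pg = {"A": 45.0, "B": 20.0, "C": 60.0}  # composite PG score per hospitalist
ids = sorted(mean_hmccot)
r, p = pearsonr([mean_hmccot[i] for i in ids], [pg[i] for i in ids])
print(f"Pearson r = {r:.2f}, P = {p:.3f}")
```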

RESULTS

The average clinical experience of the 26 hospitalist physicians studied was 6 years (Table 1). Their mean age was 38 years, 13 (50%) were female, and 16 (62%) were of nonwhite race. Fourteen hospitalists (54%) worked at 1 of the nonacademic hospitals. In terms of clinical workload, most physicians (n = 17, 65%) devoted more than 70% of their time working in direct patient care. Mean time spent observing each physician was 280 minutes. During this time, the 26 physicians were observed for 181 separate clinical encounters; 54% of these patients were new encounters, patients who were not previously known to the physician. The average time each physician spent in a patient room was 10.8 minutes. Mean number of observed patient encounters per hospitalist was 7.

Characteristics of the Hospitalist Physicians Based on Their Hospital Medicine Comportment and Communication Observation Tool Score
Total Study Population, n = 26 HMCCOT Score ≤60, n = 14 HMCCOT Score >60, n = 12 P Value*
NOTE: Abbreviations: HCGH, Howard County General Hospital; HMCCOT, Hospital Medicine Comportment and Communication Observation Tool; JHBMC, Johns Hopkins Bayview Medical Center; JHH, Johns Hopkins Hospital; SD, standard deviation; SMC, Sibley Memorial Hospital. *χ2 with Yates-corrected P value where at least 20% of frequencies were <5. Unpaired t test statistic.

Age, mean (SD) 38 (5.6) 37.9 (5.6) 38.1 (5.7) 0.95
Female, n (%) 13 (50) 6 (43) 7 (58) 0.43
Race, n (%)
Caucasian 10 (38) 5 (36) 5 (41) 0.31
Asian 13 (50) 8 (57) 5 (41)
African/African American 2 (8) 0 (0) 2 (17)
Other 1 (4) 1 (7) 0 (0)
Clinical experience >6 years, n (%) 12 (46) 6 (43) 6 (50) 0.72
Clinical workload >70% 17 (65) 10 (71) 7 (58) 0.48
Academic hospitalist, n (%) 12 (46) 5 (36) 7 (58) 0.25
Hospital 0.47
JHBMC 8 (31) 3 (21.4) 5 (41)
JHH 4 (15) 2 (14.3) 2 (17)
HCGH 5 (19) 3 (21.4) 2 (17)
Suburban 6 (23) 3 (21.4) 3 (25)
SMC 3 (12) 3 (21.4) 0 (0)
Minutes spent observing hospitalist per shift, mean (SD) 280 (104.5) 280.4 (115.5) 281.4 (95.3) 0.98
Average time spent per patient encounter in minutes, mean (SD) 10.8 (8.9) 8.7 (9.1) 13 (8.1) 0.001
Observed patients who were new to provider, n (%) 97 (53.5) 37 (39.7) 60 (68.1) 0.001

The distribution of HMCCOT scores did not differ significantly by age, gender, race, amount of clinical experience, clinical workload of the hospitalist, hospital, or time spent observing the hospitalist (all P > 0.05). The distribution of HMCCOT scores was statistically different in new patient encounters compared with follow-ups (68.1% vs 39.7%, P = 0.001). Encounters that generated HMCCOT scores above the mean were longer than those that generated scores below it (13 minutes vs 8.7 minutes, P = 0.001).

The mean HMCCOT score was 61 (standard deviation [SD] = 10.6), and scores were normally distributed (Figure 1). Table 2 shows the data for the 23 behaviors assessed as part of the HMCCOT across the 181 patient encounters. The most frequently observed behaviors were washing hands after leaving the patient's room (170 encounters, 94%) and smiling (83%). The behaviors observed with the least regularity were using an empathic statement (26% of encounters) and employing teach-back (13% of encounters). A common way of demonstrating interest in the patient as a person, seen in 41% of encounters, was asking about patients' personal histories and interests.

Objective and Subjective Data Making Up the Hospital Medicine Comportment and Communication Observation Tool Score Assessed While Observing 26 Hospitalist Physicians
Variables All Visits Combined, n = 181 HMCCOT Score <60, n = 93 HMCCOT Score >60, n = 88 P Value*
NOTE: Abbreviations: HMCCOT, Hospital Medicine Comportment and Communication Observation Tool. *χ2 with Yates-corrected P value where at least 20% of frequencies were <5.

Objective observations, n (%)
Washes hands after leaving room 170 (94) 83 (89) 87 (99) 0.007
Discusses plan for the day 163 (91) 78 (84) 85 (99) <0.001
Does not interrupt the patient 159 (88) 79 (85) 80 (91) 0.21
Smiles 149 (83) 71 (77) 78 (89) 0.04
Washes hands before entering 139 (77) 64 (69) 75 (85) 0.009
Begins with open‐ended question 134 (77) 68 (76) 66 (78) 0.74
Knocks before entering the room 127 (76) 57 (65) 70 (89) <0.001
Introduces him/herself to the patient 122 (67) 45 (48) 77 (88) <0.001
Explains his/her role 120 (66) 44 (47) 76 (86) <0.001
Asks about pain 110 (61) 45 (49) 65 (74) 0.001
Asks permission prior to examining 106 (61) 43 (50) 63 (72) 0.002
Uncovers body area for the physical exam 100 (57) 34 (38) 66 (77) <0.001
Discusses discharge plan 99 (55) 38 (41) 61 (71) <0.001
Sits down in the patient room 74 (41) 24 (26) 50 (57) <0.001
Asks about patient's feelings 58 (33) 17 (19) 41 (47) <0.001
Shakes hands with the patient 57 (32) 17 (18) 40 (46) <0.001
Uses teach‐back 24 (13) 4 (4.3) 20 (24) <0.001
Subjective observations, n (%)
Avoids medical jargon 160 (89) 85 (91) 83 (95) 0.28
Demonstrates interest in patient as a person 72 (41) 16 (18) 56 (66) <0.001
Touches appropriately 62 (34) 21 (23) 41 (47) 0.001
Shows sensitivity to patient modesty 57 (93) 15 (79) 42 (100) 0.002
Engages in nonmedical conversation 54 (30) 10 (11) 44 (51) <0.001
Uses empathic statement 47 (26) 9 (10) 38 (43) <0.001
Figure 1
Distribution of mean hospital medicine comportment and communication observation tool (HMCCOT) scores for the 26 hospitalist providers who were observed.

The average composite PG score for the physician sample was 38.95 (SD = 39.64). A moderate correlation was found between the HMCCOT score and the PG score (adjusted Pearson correlation: 0.45, P = 0.047).

DISCUSSION

In this study, we followed 26 hospitalist physicians during routine clinical care, and we focused intently on their communication and their comportment with patients at the bedside. Even among clinically respected hospitalists, the results reveal that there is wide variability in comportment and communication practices and behaviors at the bedside. The physicians' HMCCOT scores were associated with their PG scores. These findings suggest that improved bedside communication and comportment with patients might translate into enhanced patient satisfaction.

This is the first study to home in on hospitalist communication and comportment. With validity evidence established for the HMCCOT, some hospitalists may elect to perform these behaviors more deliberately themselves, and others may wish to observe colleagues in order to give feedback tied to specific behaviors. Beginning with the basics, the hospitalists we studied introduced themselves to their patients at the initial encounter 78% of the time, less frequently than primary care clinicians (89%) but more consistently than emergency department providers (64%).[7] Another variable that stood out in the HMCCOT was that teach-back was employed in only 13% of the encounters. Previous studies have shown that teach-back confirms patient comprehension and can be used to engage patients (and caregivers) in realistic goal setting and optimal health service utilization.[14] Further, patients who clearly understand their postdischarge plan are 30% less likely to be readmitted or to visit the emergency department.[14] The data for our group have helped us to see areas of strength, such as hand washing, where we are above compliance rates across hospitals in the United States,[15] as well as opportunities for improvement, such as connecting more deeply with our patients.

Tackett et al. examined encounter length and its association with the performance of etiquette-based medicine behaviors.[7] Similar to their study, we found a positive correlation between spending more time with patients and higher HMCCOT scores. We also found that HMCCOT scores were higher when providers were caring for new patients. Patients' complaints about doctors often relate to feeling rushed, to their physicians not listening to them, or to information not being conveyed in a clear manner.[16] Such challenges in physician–patient communication are ubiquitous across clinical settings.[16] When successfully achieved, patient-centered communication has been associated with improved clinical outcomes, including adherence to recommended treatment and better self-management of chronic disease.[17, 18, 19, 20, 21, 22, 23, 24, 25, 26] Many of the components of the HMCCOT described in this article are at the heart of patient-centered care.

Several limitations of the study should be considered. First, physicians may have behaved differently while they were being observed (the Hawthorne effect). However, we observed them for many hours and across multiple patient encounters, and the physicians were not aware of the specific types of data that we were collecting; these factors may have mitigated this bias. Second, there may be elements of optimal comportment and communication that were not captured by the HMCCOT. We believe any such gaps are small, because we used multiple methods and an iterative process in refining the HMCCOT. Third, one investigator did all of the observing, and it is possible that he missed certain behaviors; however, through extensive pilot testing and comparisons with other raters, the observer became highly skilled with the tool and the data collection. Fourth, we did not survey the patients cared for during the observed encounters to compare their perspectives with the HMCCOT scores; for patient perspectives, we relied only on PG scores. Fifth, quality of care is a broad and multidimensional construct. The HMCCOT focuses exclusively on hospitalists' comportment and communication at the bedside; therefore, it does not comprehensively assess care quality. Sixth, with our goal to optimally validate the HMCCOT, we tested it on the top tier of hospitalists within each group. We may have observed different results had we randomly selected hospitalists from each hospital or conducted the study at hospitals in other geographic regions. Finally, all of the doctors observed worked at hospitals in the Mid-Atlantic region; however, these 5 distinct hospitals each have their own culture and are led by different administrators, and we purposively sampled both academic and community settings.

In conclusion, this study reports on the development of a comportment and communication tool that was established and validated by following clinically excellent hospitalists at the bedside. Future studies are necessary to determine whether hospitalists of all levels of experience and clinical skill can improve when given data and feedback using the HMCCOT. Larger studies will then be needed to assess whether enhancing comportment and communication can truly improve patient satisfaction and clinical outcomes in the hospital.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Susrutha Kotwal, MD, and Waseem Khaliq, MD, contributed equally to this work. The authors report no conflicts of interest.

References
1. 2014 state of hospital medicine report. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org/Web/Practice_Management/State_of_HM_Surveys/2014.aspx. Accessed January 10, 2015.
2. Press Ganey website. Available at: http://www.pressganey.com/home. Accessed December 15, 2015.
3. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed February 2, 2016.
4. Membership committee guidelines for hospitalists' patient satisfaction surveys. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org. Accessed February 2, 2016.
5. Definition of comportment. Available at: http://www.vocabulary.com/dictionary/comportment. Accessed December 15, 2015.
6. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358(19):1988–1989.
7. Tackett S, Tad-y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908–913.
8. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient-centered care. Health Aff (Millwood). 2010;29(7):1310–1318.
9. Auerbach SM. The impact on patient health outcomes of interventions targeting the patient–physician relationship. Patient. 2009;2(2):77–84.
10. Griffin SJ, Kinmonth AL, Veltman MW, Gillard S, Grant J, Stewart M. Effect on health-related outcomes of interventions to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam Med. 2004;2(6):595–608.
11. Street RL, Makoul G, Arora NK, Epstein RM. How does communication heal? Pathways linking clinician–patient communication to health outcomes. Patient Educ Couns. 2009;74(3):295–301.
12. Christmas C, Kravet SJ, Durso SC, Wright SM. Clinical excellence in academia: perspectives from masterful academic clinicians. Mayo Clin Proc. 2008;83(9):989–994.
13. Tipping MD, Forth VE, O'Leary KJ, et al. Where did the day go?—a time-motion study of hospitalists. J Hosp Med. 2010;5(6):323–328.
14. Peter D, Robinson P, Jordan M, et al. Reducing readmissions using teach-back: enhancing patient and family education. J Nurs Adm. 2015;45(1):35–42.
15. McGuckin M, Waterman R, Govednik J. Hand hygiene compliance rates in the United States—a one-year multicenter collaboration using product/volume usage measurement and feedback. Am J Med Qual. 2009;24(3):205–213.
16. Hickson GB, Clayton EW, Entman SS, et al. Obstetricians' prior malpractice experience and patients' satisfaction with care. JAMA. 1994;272(20):1583–1587.
17. Epstein RM, Street RL. Patient-Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering. NIH publication no. 07-6225. Bethesda, MD: National Cancer Institute; 2007.
18. Arora NK. Interacting with cancer patients: the significance of physicians' communication behavior. Soc Sci Med. 2003;57(5):791–806.
19. Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med. 1985;102(4):520–528.
20. Mead N, Bower P. Measuring patient-centeredness: a comparison of three observation-based instruments. Patient Educ Couns. 2000;39(1):71–80.
21. Ong LM, Haes JC, Hoos AM, Lammes FB. Doctor-patient communication: a review of the literature. Soc Sci Med. 1995;40(7):903–918.
22. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, Tarlov AR. Linking primary care performance to outcomes of care. J Fam Pract. 1998;47(3):213–220.
23. Stewart M, Brown JB, Donner A, et al. The impact of patient-centered care on outcomes. J Fam Pract. 2000;49(9):796–804.
24. Epstein RM, Franks P, Fiscella K, et al. Measuring patient-centered communication in patient-physician consultations: theoretical and practical issues. Soc Sci Med. 2005;61(7):1516–1528.
25. Mead N, Bower P. Patient-centered consultations and outcomes in primary care: a review of the literature. Patient Educ Couns. 2002;48(1):51–61.
26. Bredart A, Bouleuc C, Dolbeault S. Doctor-patient communication and satisfaction with care in oncology. Curr Opin Oncol. 2005;17(4):351–354.
Article PDF
Issue
Journal of Hospital Medicine - 11(12)
Page Number
853-858
Sections
Files
Files
Article PDF
Article PDF

In 2014, there were more than 40,000 hospitalists in the United States, and approximately 20% were employed by academic medical centers.[1] Hospitalist physicians groups are committed to delivering excellent patient care. However, the published literature is limited with respect to defining optimal care in hospital medicine.

Patient satisfaction surveys, such as Press Ganey (PG)[2] and Hospital Consumer Assessment of Healthcare Providers and Systems,[3] are being used to assess patients' contentment with the quality of care they receive while hospitalized. The Society of Hospital Medicine, the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] There are, however, several problems with the current methods. First, the attribution to specific providers is questionable. Second, recall about the provider by the patients may be poor because surveys are sent to patients days after they return home. Third, the patients' recovery and health outcomes are likely to influence their assessment of the doctor. Finally, feedback is known to be most valuable and transformative when it is specific and given in real time. Thus, a tool that is able to provide feedback at the encounter level should be more helpful than a tool that offers assessment at the level of the admission, particularly when it can be also delivered immediately after the data are collected.

Comportment has been used to describe both the way a person behaves and also the way she carries herself (ie, her general manner).[5] Excellent comportment and communication can serve as the foundation for delivering patient‐centered care.[6, 7, 8] Patient centeredness has been shown to improve the patient experience and clinical outcomes, including compliance with therapeutic plans.[9, 10, 11] Respectful behavior, etiquette‐based medicine, and effective communication also lay the foundation upon which the therapeutic alliance between a doctor and patient can be built.

The goal of this study was to establish a metric that could comprehensively assess a hospitalist provider's comportment and communication skills during an encounter with a hospitalized patient.

METHODS

Study Design and Setting

An observational study of hospitalist physicians was conducted between June 2013 and December 2013 at 5 hospitals in Maryland and Washington DC. Two are academic medical centers (Johns Hopkins Hospital and Johns Hopkins Bayview Medical Center [JHBMC]), and the others are community hospitals (Howard County General Hospital [HCGH], Sibley Memorial Hospital [SMC], and Suburban Hospital). These 5 hospitals, across 2 large cities, have distinct culture and leadership, each serving different populations.

Subjects

In developing a tool to measure communication and comportment, we needed to observe physicianpatient encounters wherein there would be a good deal of variability in performance. During pilot testing, when following a few of the most senior and respected hospitalists, we noted encounters during which they excelled and others where they performed less optimally. Further, in following some less‐experienced providers, their skills were less developed and they were uniformly missing most of the behaviors on the tool that were believed to be associated with optimal communication and comportment. Because of this, we decided to purposively sample the strongest clinicians at each of the 5 hospitals in hopes of seeing a range of scores on the tool.

The chiefs of hospital medicine at the 5 hospitals were contacted and asked to identify their most clinically excellent hospitalists, namely those who they thought were most clinically skilled within their groups. Because our goal was to observe the top tier (approximately 20%) of the hospitalists within each group, we asked each chief to name a specific number of physicians (eg, 3 names for 1 group with 15 hospitalists, and 8 from another group with 40 physicians). No precise definition of most clinically excellent hospitalists was provided to the chiefs. It was believed that they were well positioned to select their best clinicians because of both subjective feedback and objective data that flow to them. This postulate may have been corroborated by the fact that each of them efficiently sent a list of their top choices without any questions being asked.

The 29 hospitalists (named by their chiefs) were in turn emailed and invited to participate in the study. All but 3 hospitalists consented to participate in the study; this resulted in a cohort of 26 who would be observed.

Tool Development

A team was assembled to develop the hospital medicine comportment and communication observation tool (HMCCOT). All team members had extensive clinical experience, several had published articles on clinical excellence, had won clinical awards, and all had been teaching clinical skills for many years. The team's development of the HMCCOT was extensively informed by a review of the literature. Two articles that most heavily influenced the HMCCOT's development were Christmas et al.'s paper describing 7 core domains of excellence, 2 of which are intimately linked to communication and comportment,[12] and Kahn's text that delineates behaviors to be performed upon entering the patient's room, termed etiquette‐based medicine.[6] The team also considered the work from prior timemotion studies in hospital medicine,[7, 13] which led to the inclusion of temporal measurements during the observations. The tool was also presented at academic conferences in the Division of General Internal Medicine at Johns Hopkins and iteratively revised based on the feedback. Feedback was sought from people who have spent their entire career studying physicianpatient relationships and who are members of the American Academy on Communication in Healthcare. These methods established content validity evidence for the tool under development. The goal of the HMCCOT was to assess behaviors believed to be associated with optimal comportment and communication in hospital medicine.

The HMCCOT was pilot tested by observing different JHBMC hospitalists patient encounters and it was iteratively revised. On multiple occasions, 2 authors/emnvestigators spent time observing JHBMC hospitalists together and compared data capture and levels of agreement across all elements. Then, for formal assessment of inter‐rater reliability, 2 authors observed 5 different hospitalists across 25 patient encounters; the coefficient was 0.91 (standard error = 0.04). This step helped to establish internal structure validity evidence for the tool.

The initial version of the HMCCOT contained 36 elements, and it was organized sequentially to allow the observer to document behaviors in the order that they were likely to occur so as to facilitate the process and to minimize oversight. A few examples of the elements were as follows: open‐ended versus a close‐ended statement at the beginning of the encounter, hospitalist introduces himself/herself, and whether the provider smiles at any point during the patient encounter.

Data Collection

One author scheduled a time to observe each hospitalist physician during their routine clinical care of patients when they were not working with medical learners. Hospitalists were naturally aware that they were being observed but were not aware of the specific data elements or behaviors that were being recorded.

The study was approved by the institutional review board at the Johns Hopkins University School of Medicine, and by each of the research review committees at HCGH, SMC, and Suburban hospitals.

Data Analysis

After data collection, all data were deidentified so that the researchers were blinded to the identities of the physicians. Respondent characteristics are presented as proportions and means. Unpaired t test and 2 tests were used to compare demographic information, and stratified by mean HMCCOT score. The survey data were analyzed using Stata statistical software version 12.1 (StataCorp LP, College Station, TX).

Further Validation of the HMCCOT

Upon reviewing the distribution of data after observing the 26 physicians with their patients, we excluded 13 variables from the initial version of the tool that lacked discriminatory value (eg, 100% or 0% of physicians performed the observed behavior during the encounters); this left 23 variables that were judged to be most clinically relevant in the final version of the HMCCOT. Two examples of the variables that were excluded were: uses technology/literature to educate patients (not witnessed in any encounter), and obeys posted contact precautions (done uniformly by all). The HMCCOT score represents the proportion of observed behaviors (out of the 23 behaviors). It was computed for each hospitalist for every patient encounter. Finally, relation to other variables validity evidence would be established by comparing the mean HMCCOT scores of the physicians to their PG scores from the same time period to evaluate the correlation between the 2 scores. This association was assessed using Pearson correlations.

RESULTS

The average clinical experience of the 26 hospitalist physicians studied was 6 years (Table 1). Their mean age was 38 years, 13 (50%) were female, and 16 (62%) were of nonwhite race. Fourteen hospitalists (54%) worked at 1 of the nonacademic hospitals. In terms of clinical workload, most physicians (n = 17, 65%) devoted more than 70% of their time working in direct patient care. Mean time spent observing each physician was 280 minutes. During this time, the 26 physicians were observed for 181 separate clinical encounters; 54% of these patients were new encounters, patients who were not previously known to the physician. The average time each physician spent in a patient room was 10.8 minutes. Mean number of observed patient encounters per hospitalist was 7.

Characteristics of the Hospitalist Physicians Based on Their Hospital Medicine Comportment and Communication Observation Tool Score
Total Study Population, n = 26 HMCCOT Score 60, n = 14 HMCCOT Score >60, n = 12 P Value*
  • NOTE: Abbreviations: HCGH, Howard County General Hospital; HMCCOT, Hospital Medicine Comportment and Communication Observation Tool; JHBMC, Johns Hopkins Bayview Medical Center; JHH, Johns Hopkins Hospital; SD, standard deviation; SMC, Sibley Memorial Hospital. *2 with Yates‐corrected P value where at least 20% of frequencies were <5. Unpaired t test statistic

Age, mean (SD) 38 (5.6) 37.9 (5.6) 38.1 (5.7) 0.95
Female, n (%) 13 (50) 6 (43) 7 (58) 0.43
Race, n (%)
Caucasian 10 (38) 5 (36) 5 (41) 0.31
Asian 13 (50) 8 (57) 5 (41)
African/African American 2 (8) 0 (0) 2 (17)
Other 1 (4) 1 (7) 0 (0)
Clinical experience >6 years, n (%) 12 (46) 6 (43) 6 (50) 0.72
Clinical workload >70% 17 (65) 10 (71) 7 (58) 0.48
Academic hospitalist, n (%) 12 (46) 5 (36) 7 (58) 0.25
Hospital 0.47
JHBMC 8 (31) 3 (21.4) 5 (41)
JHH 4 (15) 2 (14.3) 2 (17)
HCGH 5 (19) 3 (21.4) 2 (17)
Suburban 6 (23) 3 (21.4) 3 (25)
SMC 3 (12) 3 (21.4) 0 (0)
Minutes spent observing hospitalist per shift, mean (SD) 280 (104.5) 280.4 (115.5) 281.4 (95.3) 0.98
Average time spent per patient encounter in minutes, mean (SD) 10.8 (8.9) 8.7 (9.1) 13 (8.1) 0.001
Proportion of observed patients who were new to provider, % 97 (53.5) 37 (39.7) 60 (68.1) 0.001

The distribution of HMCCOT scores was not statistically significantly different when analyzed by age, gender, race, amount of clinical experience, clinical workload of the hospitalist, hospital, time spent observing the hospitalist (all P > 0.05). The distribution of HMCCOT scores was statistically different in new patient encounters compared to follow‐ups (68.1% vs 39.7%, P 0.001). Encounters with patients that generated HMCCOT scores above versus below the mean were longer (13 minutes vs 8.7 minutes, P 0.001).

The mean HMCCOT score was 61 (standard deviation [SD] = 10.6), and it was normally distributed (Figure 1). Table 2 shows the data for the 23 behaviors that were objectively assessed as part of the HMCCOT for the 181 patient encounters. The most frequently observed behaviors were physicians washing hands after leaving the patient's room in 170 (94%) of the encounters and smiling (83%). The behaviors that were observed with the least regularity were using an empathic statement (26% of encounters), and employing teach‐back (13% of encounters). A common method of demonstrating interest in the patient as a person, seen in 41% of encounters, involved physicians asking about patients' personal histories and their interests.

Objective and Subjective Data Making Up the Hospital Medicine Comportment and Communication Observation Tool Score Assessed While Observing 26 Hospitalist Physicians
Variables All Visits Combined, n = 181 HMCCOT Score <60, n = 93 HMCCOT Score >60, n = 88 P Value*
  • NOTE: Abbreviations: HMCCOT, Hospital Medicine Comportment and Communication Observation Tool. *2 with Yates‐corrected P value where at least 20% of frequencies were <5.

Objective observations, n (%)
Washes hands after leaving room 170 (94) 83 (89) 87 (99) 0.007
Discusses plan for the day 163 (91) 78 (84) 85 (99) <0.001
Does not interrupt the patient 159 (88) 79 (85) 80 (91) 0.21
Smiles 149 (83) 71 (77) 78 (89) 0.04
Washes hands before entering 139 (77) 64 (69) 75 (85) 0.009
Begins with open‐ended question 134 (77) 68 (76) 66 (78) 0.74
Knocks before entering the room 127 (76) 57 (65) 70 (89) <0.001
Introduces him/herself to the patient 122 (67) 45 (48) 77 (88) <0.001
Explains his/her role 120 (66) 44 (47) 76 (86) <0.001
Asks about pain 110 (61) 45 (49) 65 (74) 0.001
Asks permission prior to examining 106 (61) 43 (50) 63 (72) 0.002
Uncovers body area for the physical exam 100 (57) 34 (38) 66 (77) <0.001
Discusses discharge plan 99 (55) 38 (41) 61 (71) <0.001
Sits down in the patient room 74 (41) 24 (26) 50 (57) <0.001
Asks about patient's feelings 58 (33) 17 (19) 41 (47) <0.001
Shakes hands with the patient 57 (32) 17 (18) 40 (46) <0.001
Uses teach‐back 24 (13) 4 (4.3) 20 (24) <0.001
Subjective observations, n (%)
Avoids medical jargon 160 (89) 85 (91) 83 (95) 0.28
Demonstrates interest in patient as a person 72 (41) 16 (18) 56 (66) <0.001
Touches appropriately 62 (34) 21 (23) 41 (47) 0.001
Shows sensitivity to patient modesty 57 (93) 15 (79) 42 (100) 0.002
Engages in nonmedical conversation 54 (30) 10 (11) 44 (51) <0.001
Uses empathic statement 47 (26) 9 (10) 38 (43) <0.001
Figure 1
Distribution of mean hospital medicine comportment and communication tool (HMCCOT) scores for the 26 hospitalist providers who were observed.

The average composite PG scores for the physician sample was 38.95 (SD=39.64). A moderate correlation was found between the HMCCOT score and PG score (adjusted Pearson correlation: 0.45, P = 0.047).

DISCUSSION

In this study, we followed 26 hospitalist physicians during routine clinical care, and we focused intently on their communication and their comportment with patients at the bedside. Even among clinically respected hospitalists, the results reveal that there is wide variability in comportment and communication practices and behaviors at the bedside. The physicians' HMCCOT scores were associated with their PG scores. These findings suggest that improved bedside communication and comportment with patients might translate into enhanced patient satisfaction.

This is the first study that honed in on hospitalist communication and comportment. With validity evidence established for the HMCCOT, some may elect to more explicitly perform these behaviors themselves, and others may wish to watch other hospitalists to give them feedback that is tied to specific behaviors. Beginning with the basics, the hospitalists we studied introduced themselves to their patients at the initial encounter 78% of the time, less frequently than is done by primary care clinicians (89%) but more consistently than do emergency department providers (64%).[7] Other variables that stood out in the HMCCOT was that teach‐back was employed in only 13% of the encounters. Previous studies have shown that teach‐back corroborates patient comprehension and can be used to engage patients (and caregivers) in realistic goal setting and optimal health service utilization.[14] Further, patients who clearly understand their postdischarge plan are 30% less likely to be readmitted or visit the emergency department.[14] The data for our group have helped us to see areas of strengths, such as hand washing, where we are above compliance rates across hospitals in the United States,[15] as well as those matters that represent opportunities for improvement such as connecting more deeply with our patients.

Tackett et al. have looked at encounter length and its association with performance of etiquette‐based medicine behaviors.[7] Similar to their study, we found a positive correlation between spending more time with patients and higher HMCCOT scores. We also found that HMCCOT scores were higher when providers were caring for new patients. Patients' complaints about doctors often relate to feeling rushed, that their physicians did not listen to them, or that information was not conveyed in a clear manner.[16] Such challenges in physicianpatient communication are ubiquitous across clinical settings.[16] When successfully achieved, patient‐centered communication has been associated with improved clinical outcomes, including adherence to recommended treatment and better self‐management of chronic disease.[17, 18, 19, 20, 21, 22, 23, 24, 25, 26] Many of the components of the HMCCOT described in this article are at the heart of patient‐centered care.

Several limitations of the study should be considered. First, physicians may have behaved differently while they were being observed, which is known as the Hawthorne effect. We observed them for many hours and across multiple patient encounters, and the physicians were not aware of the specific types of data that we were collecting. These factors may have limited the biases along such lines. Second, there may be elements of optimal comportment and communication that were not captured by the HMCCOT. Hopefully, there are not big gaps, as we used multiple methods and an iterative process in the refinement of the HMCCOT metric. Third, one investigator did all of the observing, and it is possible that he might have missed certain behaviors. Through extensive pilot testing and comparisons with other raters, the observer became very skilled and facile with such data collection and the tool. Fourth, we did not survey the same patients that were cared for to compare their perspectives to the HMCCOT scores following the clinical encounters. For patient perspectives, we relied only on PG scores. Fifth, quality of care is a broad and multidimensional construct. The HMCCOT focuses exclusively on hospitalists' comportment and communication at the bedside; therefore, it does not comprehensively assess care quality. Sixth, with our goal to optimally validate the HMCCOT, we tested it on the top tier of hospitalists within each group. We may have observed different results had we randomly selected hospitalists from each hospital or had we conducted the study at hospitals in other geographic regions. Finally, all of the doctors observed worked at hospitals in the Mid‐Atlantic region. However, these five distinct hospitals each have their own cultures, and they are led by different administrators. We purposively chose to sample both academic as well as community settings.

In conclusion, this study reports on the development of a comportment and communication tool that was established and validated by following clinically excellent hospitalists at the bedside. Future studies are necessary to determine whether hospitalists of all levels of experience and clinical skill can improve when given data and feedback using the HMCCOT. Larger studies will then be needed to assess whether enhancing comportment and communication can truly improve patient satisfaction and clinical outcomes in the hospital.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Susrutha Kotwal, MD, and Waseem Khaliq, MD, contributed equally to this work. The authors report no conflicts of interest.

In 2014, there were more than 40,000 hospitalists in the United States, and approximately 20% were employed by academic medical centers.[1] Hospitalist physicians groups are committed to delivering excellent patient care. However, the published literature is limited with respect to defining optimal care in hospital medicine.

Patient satisfaction surveys, such as Press Ganey (PG)[2] and Hospital Consumer Assessment of Healthcare Providers and Systems,[3] are being used to assess patients' contentment with the quality of care they receive while hospitalized. The Society of Hospital Medicine, the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] There are, however, several problems with the current methods. First, the attribution to specific providers is questionable. Second, recall about the provider by the patients may be poor because surveys are sent to patients days after they return home. Third, the patients' recovery and health outcomes are likely to influence their assessment of the doctor. Finally, feedback is known to be most valuable and transformative when it is specific and given in real time. Thus, a tool that is able to provide feedback at the encounter level should be more helpful than a tool that offers assessment at the level of the admission, particularly when it can be also delivered immediately after the data are collected.

Comportment has been used to describe both the way a person behaves and also the way she carries herself (ie, her general manner).[5] Excellent comportment and communication can serve as the foundation for delivering patient‐centered care.[6, 7, 8] Patient centeredness has been shown to improve the patient experience and clinical outcomes, including compliance with therapeutic plans.[9, 10, 11] Respectful behavior, etiquette‐based medicine, and effective communication also lay the foundation upon which the therapeutic alliance between a doctor and patient can be built.

The goal of this study was to establish a metric that could comprehensively assess a hospitalist provider's comportment and communication skills during an encounter with a hospitalized patient.

METHODS

Study Design and Setting

An observational study of hospitalist physicians was conducted between June 2013 and December 2013 at 5 hospitals in Maryland and Washington DC. Two are academic medical centers (Johns Hopkins Hospital and Johns Hopkins Bayview Medical Center [JHBMC]), and the others are community hospitals (Howard County General Hospital [HCGH], Sibley Memorial Hospital [SMC], and Suburban Hospital). These 5 hospitals, across 2 large cities, have distinct culture and leadership, each serving different populations.

Subjects

In developing a tool to measure communication and comportment, we needed to observe physicianpatient encounters wherein there would be a good deal of variability in performance. During pilot testing, when following a few of the most senior and respected hospitalists, we noted encounters during which they excelled and others where they performed less optimally. Further, in following some less‐experienced providers, their skills were less developed and they were uniformly missing most of the behaviors on the tool that were believed to be associated with optimal communication and comportment. Because of this, we decided to purposively sample the strongest clinicians at each of the 5 hospitals in hopes of seeing a range of scores on the tool.

The chiefs of hospital medicine at the 5 hospitals were contacted and asked to identify their most clinically excellent hospitalists, namely those who they thought were most clinically skilled within their groups. Because our goal was to observe the top tier (approximately 20%) of the hospitalists within each group, we asked each chief to name a specific number of physicians (eg, 3 names for 1 group with 15 hospitalists, and 8 from another group with 40 physicians). No precise definition of most clinically excellent hospitalists was provided to the chiefs. It was believed that they were well positioned to select their best clinicians because of both subjective feedback and objective data that flow to them. This postulate may have been corroborated by the fact that each of them efficiently sent a list of their top choices without any questions being asked.

The 29 hospitalists (named by their chiefs) were in turn emailed and invited to participate in the study. All but 3 hospitalists consented to participate in the study; this resulted in a cohort of 26 who would be observed.

Tool Development

A team was assembled to develop the hospital medicine comportment and communication observation tool (HMCCOT). All team members had extensive clinical experience, several had published articles on clinical excellence, had won clinical awards, and all had been teaching clinical skills for many years. The team's development of the HMCCOT was extensively informed by a review of the literature. Two articles that most heavily influenced the HMCCOT's development were Christmas et al.'s paper describing 7 core domains of excellence, 2 of which are intimately linked to communication and comportment,[12] and Kahn's text that delineates behaviors to be performed upon entering the patient's room, termed etiquette‐based medicine.[6] The team also considered the work from prior timemotion studies in hospital medicine,[7, 13] which led to the inclusion of temporal measurements during the observations. The tool was also presented at academic conferences in the Division of General Internal Medicine at Johns Hopkins and iteratively revised based on the feedback. Feedback was sought from people who have spent their entire career studying physicianpatient relationships and who are members of the American Academy on Communication in Healthcare. These methods established content validity evidence for the tool under development. The goal of the HMCCOT was to assess behaviors believed to be associated with optimal comportment and communication in hospital medicine.

The HMCCOT was pilot tested by observing different JHBMC hospitalists patient encounters and it was iteratively revised. On multiple occasions, 2 authors/emnvestigators spent time observing JHBMC hospitalists together and compared data capture and levels of agreement across all elements. Then, for formal assessment of inter‐rater reliability, 2 authors observed 5 different hospitalists across 25 patient encounters; the coefficient was 0.91 (standard error = 0.04). This step helped to establish internal structure validity evidence for the tool.

The initial version of the HMCCOT contained 36 elements, and it was organized sequentially to allow the observer to document behaviors in the order that they were likely to occur so as to facilitate the process and to minimize oversight. A few examples of the elements were as follows: open‐ended versus a close‐ended statement at the beginning of the encounter, hospitalist introduces himself/herself, and whether the provider smiles at any point during the patient encounter.

Data Collection

One author scheduled a time to observe each hospitalist physician during their routine clinical care of patients when they were not working with medical learners. Hospitalists were naturally aware that they were being observed but were not aware of the specific data elements or behaviors that were being recorded.

The study was approved by the institutional review board at the Johns Hopkins University School of Medicine, and by each of the research review committees at HCGH, SMC, and Suburban hospitals.

Data Analysis

After data collection, all data were deidentified so that the researchers were blinded to the identities of the physicians. Respondent characteristics are presented as proportions and means. Unpaired t test and 2 tests were used to compare demographic information, and stratified by mean HMCCOT score. The survey data were analyzed using Stata statistical software version 12.1 (StataCorp LP, College Station, TX).

Further Validation of the HMCCOT

Upon reviewing the distribution of data after observing the 26 physicians with their patients, we excluded 13 variables from the initial version of the tool that lacked discriminatory value (eg, 100% or 0% of physicians performed the observed behavior during the encounters); this left 23 variables that were judged to be most clinically relevant in the final version of the HMCCOT. Two examples of the variables that were excluded were: uses technology/literature to educate patients (not witnessed in any encounter), and obeys posted contact precautions (done uniformly by all). The HMCCOT score represents the proportion of observed behaviors (out of the 23 behaviors). It was computed for each hospitalist for every patient encounter. Finally, relation to other variables validity evidence would be established by comparing the mean HMCCOT scores of the physicians to their PG scores from the same time period to evaluate the correlation between the 2 scores. This association was assessed using Pearson correlations.

RESULTS

The average clinical experience of the 26 hospitalist physicians studied was 6 years (Table 1). Their mean age was 38 years, 13 (50%) were female, and 16 (62%) were of nonwhite race. Fourteen hospitalists (54%) worked at 1 of the nonacademic hospitals. In terms of clinical workload, most physicians (n = 17, 65%) devoted more than 70% of their time working in direct patient care. Mean time spent observing each physician was 280 minutes. During this time, the 26 physicians were observed for 181 separate clinical encounters; 54% of these patients were new encounters, patients who were not previously known to the physician. The average time each physician spent in a patient room was 10.8 minutes. Mean number of observed patient encounters per hospitalist was 7.

Characteristics of the Hospitalist Physicians Based on Their Hospital Medicine Comportment and Communication Observation Tool Score
Total Study Population, n = 26 HMCCOT Score 60, n = 14 HMCCOT Score >60, n = 12 P Value*
  • NOTE: Abbreviations: HCGH, Howard County General Hospital; HMCCOT, Hospital Medicine Comportment and Communication Observation Tool; JHBMC, Johns Hopkins Bayview Medical Center; JHH, Johns Hopkins Hospital; SD, standard deviation; SMC, Sibley Memorial Hospital. *2 with Yates‐corrected P value where at least 20% of frequencies were <5. Unpaired t test statistic

Age, mean (SD) 38 (5.6) 37.9 (5.6) 38.1 (5.7) 0.95
Female, n (%) 13 (50) 6 (43) 7 (58) 0.43
Race, n (%)
Caucasian 10 (38) 5 (36) 5 (41) 0.31
Asian 13 (50) 8 (57) 5 (41)
African/African American 2 (8) 0 (0) 2 (17)
Other 1 (4) 1 (7) 0 (0)
Clinical experience >6 years, n (%) 12 (46) 6 (43) 6 (50) 0.72
Clinical workload >70% 17 (65) 10 (71) 7 (58) 0.48
Academic hospitalist, n (%) 12 (46) 5 (36) 7 (58) 0.25
Hospital 0.47
JHBMC 8 (31) 3 (21.4) 5 (41)
JHH 4 (15) 2 (14.3) 2 (17)
HCGH 5 (19) 3 (21.4) 2 (17)
Suburban 6 (23) 3 (21.4) 3 (25)
SMC 3 (12) 3 (21.4) 0 (0)
Minutes spent observing hospitalist per shift, mean (SD) 280 (104.5) 280.4 (115.5) 281.4 (95.3) 0.98
Average time spent per patient encounter in minutes, mean (SD) 10.8 (8.9) 8.7 (9.1) 13 (8.1) 0.001
Proportion of observed patients who were new to provider, % 97 (53.5) 37 (39.7) 60 (68.1) 0.001

The distribution of HMCCOT scores was not statistically significantly different when analyzed by age, gender, race, amount of clinical experience, clinical workload of the hospitalist, hospital, time spent observing the hospitalist (all P > 0.05). The distribution of HMCCOT scores was statistically different in new patient encounters compared to follow‐ups (68.1% vs 39.7%, P 0.001). Encounters with patients that generated HMCCOT scores above versus below the mean were longer (13 minutes vs 8.7 minutes, P 0.001).

The mean HMCCOT score was 61 (standard deviation [SD] = 10.6), and it was normally distributed (Figure 1). Table 2 shows the data for the 23 behaviors that were objectively assessed as part of the HMCCOT for the 181 patient encounters. The most frequently observed behaviors were physicians washing hands after leaving the patient's room in 170 (94%) of the encounters and smiling (83%). The behaviors that were observed with the least regularity were using an empathic statement (26% of encounters), and employing teach‐back (13% of encounters). A common method of demonstrating interest in the patient as a person, seen in 41% of encounters, involved physicians asking about patients' personal histories and their interests.

Objective and Subjective Data Making Up the Hospital Medicine Comportment and Communication Observation Tool Score Assessed While Observing 26 Hospitalist Physicians
Variables All Visits Combined, n = 181 HMCCOT Score <60, n = 93 HMCCOT Score >60, n = 88 P Value*
  • NOTE: Abbreviations: HMCCOT, Hospital Medicine Comportment and Communication Observation Tool. *2 with Yates‐corrected P value where at least 20% of frequencies were <5.

Objective observations, n (%)
Washes hands after leaving room 170 (94) 83 (89) 87 (99) 0.007
Discusses plan for the day 163 (91) 78 (84) 85 (99) <0.001
Does not interrupt the patient 159 (88) 79 (85) 80 (91) 0.21
Smiles 149 (83) 71 (77) 78 (89) 0.04
Washes hands before entering 139 (77) 64 (69) 75 (85) 0.009
Begins with open‐ended question 134 (77) 68 (76) 66 (78) 0.74
Knocks before entering the room 127 (76) 57 (65) 70 (89) <0.001
Introduces him/herself to the patient 122 (67) 45 (48) 77 (88) <0.001
Explains his/her role 120 (66) 44 (47) 76 (86) <0.001
Asks about pain 110 (61) 45 (49) 65 (74) 0.001
Asks permission prior to examining 106 (61) 43 (50) 63 (72) 0.002
Uncovers body area for the physical exam 100 (57) 34 (38) 66 (77) <0.001
Discusses discharge plan 99 (55) 38 (41) 61 (71) <0.001
Sits down in the patient room 74 (41) 24 (26) 50 (57) <0.001
Asks about patient's feelings 58 (33) 17 (19) 41 (47) <0.001
Shakes hands with the patient 57 (32) 17 (18) 40 (46) <0.001
Uses teach‐back 24 (13) 4 (4.3) 20 (24) <0.001
Subjective observations, n (%)
Avoids medical jargon 160 (89) 85 (91) 83 (95) 0.28
Demonstrates interest in patient as a person 72 (41) 16 (18) 56 (66) <0.001
Touches appropriately 62 (34) 21 (23) 41 (47) 0.001
Shows sensitivity to patient modesty 57 (93) 15 (79) 42 (100) 0.002
Engages in nonmedical conversation 54 (30) 10 (11) 44 (51) <0.001
Uses empathic statement 47 (26) 9 (10) 38 (43) <0.001
Figure 1
Distribution of mean hospital medicine comportment and communication tool (HMCCOT) scores for the 26 hospitalist providers who were observed.

The average composite PG scores for the physician sample was 38.95 (SD=39.64). A moderate correlation was found between the HMCCOT score and PG score (adjusted Pearson correlation: 0.45, P = 0.047).

DISCUSSION

In this study, we followed 26 hospitalist physicians during routine clinical care, and we focused intently on their communication and their comportment with patients at the bedside. Even among clinically respected hospitalists, the results reveal that there is wide variability in comportment and communication practices and behaviors at the bedside. The physicians' HMCCOT scores were associated with their PG scores. These findings suggest that improved bedside communication and comportment with patients might translate into enhanced patient satisfaction.

This is the first study that honed in on hospitalist communication and comportment. With validity evidence established for the HMCCOT, some may elect to more explicitly perform these behaviors themselves, and others may wish to watch other hospitalists to give them feedback that is tied to specific behaviors. Beginning with the basics, the hospitalists we studied introduced themselves to their patients at the initial encounter 78% of the time, less frequently than is done by primary care clinicians (89%) but more consistently than do emergency department providers (64%).[7] Other variables that stood out in the HMCCOT was that teach‐back was employed in only 13% of the encounters. Previous studies have shown that teach‐back corroborates patient comprehension and can be used to engage patients (and caregivers) in realistic goal setting and optimal health service utilization.[14] Further, patients who clearly understand their postdischarge plan are 30% less likely to be readmitted or visit the emergency department.[14] The data for our group have helped us to see areas of strengths, such as hand washing, where we are above compliance rates across hospitals in the United States,[15] as well as those matters that represent opportunities for improvement such as connecting more deeply with our patients.

Tackett et al. examined encounter length and its association with performance of etiquette‐based medicine behaviors.[7] Similar to their study, we found a positive correlation between spending more time with patients and higher HMCCOT scores. We also found that HMCCOT scores were higher when providers were caring for new patients. Patients' complaints about doctors often relate to feeling rushed, not being listened to, or not having information conveyed in a clear manner.[16] Such challenges in physician-patient communication are ubiquitous across clinical settings.[16] When successfully achieved, patient‐centered communication has been associated with improved clinical outcomes, including adherence to recommended treatment and better self‐management of chronic disease.[17, 18, 19, 20, 21, 22, 23, 24, 25, 26] Many of the components of the HMCCOT described in this article are at the heart of patient‐centered care.

Several limitations of the study should be considered. First, physicians may have behaved differently while being observed (the Hawthorne effect); however, we observed them for many hours and across multiple patient encounters, and they were not aware of the specific types of data being collected, which may have limited this bias. Second, there may be elements of optimal comportment and communication that were not captured by the HMCCOT. We hope any such gaps are small, because we used multiple methods and an iterative process to refine the HMCCOT. Third, one investigator did all of the observing and might have missed certain behaviors; through extensive pilot testing and comparisons with other raters, however, the observer became highly skilled with the tool and the data collection. Fourth, we did not survey the patients cared for during the observed encounters to compare their perspectives with the HMCCOT scores; for patient perspectives, we relied only on PG scores. Fifth, quality of care is a broad and multidimensional construct; because the HMCCOT focuses exclusively on hospitalists' comportment and communication at the bedside, it does not comprehensively assess care quality. Sixth, because our goal was to optimally validate the HMCCOT, we tested it on the top tier of hospitalists within each group. We may have observed different results had we randomly selected hospitalists from each hospital or conducted the study at hospitals in other geographic regions. Finally, all of the doctors observed worked at hospitals in the Mid‐Atlantic region; however, these five distinct hospitals each have their own cultures and are led by different administrators, and we purposively sampled both academic and community settings.

In conclusion, this study reports on the development and validation of a comportment and communication tool, accomplished by observing clinically excellent hospitalists at the bedside. Future studies are necessary to determine whether hospitalists of all levels of experience and clinical skill can improve when given data and feedback using the HMCCOT. Larger studies will then be needed to assess whether enhancing comportment and communication can truly improve patient satisfaction and clinical outcomes in the hospital.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Susrutha Kotwal, MD, and Waseem Khaliq, MD, contributed equally to this work. The authors report no conflicts of interest.

References
  1. 2014 state of hospital medicine report. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org/Web/Practice_Management/State_of_HM_Surveys/2014.aspx. Accessed January 10, 2015.
  2. Press Ganey website. Available at: http://www.pressganey.com/home. Accessed December 15, 2015.
  3. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed February 2, 2016.
  4. Membership committee guidelines for hospitalists patient satisfaction surveys. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org. Accessed February 2, 2016.
  5. Definition of comportment. Available at: http://www.vocabulary.com/dictionary/comportment. Accessed December 15, 2015.
  6. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358(19):1988-1989.
  7. Tackett S, Tad-y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  8. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient-centered care. Health Aff (Millwood). 2010;29(7):1310-1318.
  9. Auerbach SM. The impact on patient health outcomes of interventions targeting the patient-physician relationship. Patient. 2009;2(2):77-84.
  10. Griffin SJ, Kinmonth AL, Veltman MW, Gillard S, Grant J, Stewart M. Effect on health-related outcomes of interventions to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam Med. 2004;2(6):595-608.
  11. Street RL, Makoul G, Arora NK, Epstein RM. How does communication heal? Pathways linking clinician-patient communication to health outcomes. Patient Educ Couns. 2009;74(3):295-301.
  12. Christmas C, Kravet SJ, Durso SC, Wright SM. Clinical excellence in academia: perspectives from masterful academic clinicians. Mayo Clin Proc. 2008;83(9):989-994.
  13. Tipping MD, Forth VE, O'Leary KJ, et al. Where did the day go? A time-motion study of hospitalists. J Hosp Med. 2010;5(6):323-328.
  14. Peter D, Robinson P, Jordan M, et al. Reducing readmissions using teach-back: enhancing patient and family education. J Nurs Adm. 2015;45(1):35-42.
  15. McGuckin M, Waterman R, Govednik J. Hand hygiene compliance rates in the United States: a one-year multicenter collaboration using product/volume usage measurement and feedback. Am J Med Qual. 2009;24(3):205-213.
  16. Hickson GB, Clayton EW, Entman SS, et al. Obstetricians' prior malpractice experience and patients' satisfaction with care. JAMA. 1994;272(20):1583-1587.
  17. Epstein RM, Street RL. Patient-Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering. NIH publication no. 07-6225. Bethesda, MD: National Cancer Institute; 2007.
  18. Arora NK. Interacting with cancer patients: the significance of physicians' communication behavior. Soc Sci Med. 2003;57(5):791-806.
  19. Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med. 1985;102(4):520-528.
  20. Mead N, Bower P. Measuring patient-centeredness: a comparison of three observation-based instruments. Patient Educ Couns. 2000;39(1):71-80.
  21. Ong LM, Haes JC, Hoos AM, Lammes FB. Doctor-patient communication: a review of the literature. Soc Sci Med. 1995;40(7):903-918.
  22. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, Tarlov AR. Linking primary care performance to outcomes of care. J Fam Pract. 1998;47(3):213-220.
  23. Stewart M, Brown JB, Donner A, et al. The impact of patient-centered care on outcomes. J Fam Pract. 2000;49(9):796-804.
  24. Epstein RM, Franks P, Fiscella K, et al. Measuring patient-centered communication in patient-physician consultations: theoretical and practical issues. Soc Sci Med. 2005;61(7):1516-1528.
  25. Mead N, Bower P. Patient-centered consultations and outcomes in primary care: a review of the literature. Patient Educ Couns. 2002;48(1):51-61.
  26. Bredart A, Bouleuc C, Dolbeault S. Doctor-patient communication and satisfaction with care in oncology. Curr Opin Oncol. 2005;17(4):351-354.
Issue
Journal of Hospital Medicine - 11(12)
Page Number
853-858
Display Headline
Developing a comportment and communication tool for use in hospital medicine
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Susrutha Kotwal, MD, Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 200 Eastern Avenue, MFL Building West Tower, 6th Floor CIMS Suite, Baltimore, MD 21224; Telephone: 410‐550‐5018; Fax: 410‐550‐2972; E‐mail: [email protected]

How Common is Coexisting Epilepsy/PNES?

Article Type
Changed
Thu, 12/15/2022 - 16:00
Display Headline
How Common is Coexisting Epilepsy/PNES?
Prevalence data that may help address diagnostic and therapeutic challenges

Researchers examined 1567 patient medical records from the Vanderbilt University Medical Center Adult Epilepsy Monitoring Unit (EMU) and found a 5.2% prevalence of coexisting epilepsy/psychogenic nonepileptic spells (PNES). Other findings include:

· Epileptic seizures were preceded by a PNES event in 94.4% of epilepsy/PNES patients.
· Patients with epilepsy/PNES had a higher prevalence of epilepsy risk factors.
· Abnormal brain MRI and abnormal neurological examination were more common in the epilepsy/PNES group.

Chen-Block S, Abou-Khalil BW, Arain A, et al. Video-EEG results and clinical characteristics in patients with psychogenic nonepileptic spells: the effect of a coexistent epilepsy. Epilepsy Behav. 2016;62:62-65.


Hospitalized Patients With Epilepsy at Risk for Specific Safety-Related Adverse Events

Article Type
Changed
Thu, 12/15/2022 - 16:00
Display Headline
Hospitalized Patients With Epilepsy at Risk for Specific Safety-Related Adverse Events
Hip fracture and respiratory failure among the AEs associated with hospitalized epilepsy patients

People with epilepsy are at an increased risk of specific safety-related adverse events while in the hospital. Researchers found that hospitalized patients with epilepsy were at greater risk for fall with hip fracture, respiratory failure, sepsis, and preventable postoperative death. The authors also reported that these adverse events were associated with a prolonged length of stay, increased odds of inpatient death, and an increased need for high-level post-acute care.

Mendizabal A, Thibault DP, Willis AW. Patient safety events in hospital care of individuals with epilepsy [published online ahead of print June 28, 2016]. Epilepsia. 2016;doi:10.1111/epi.13440.


Systemic Disease Manifestations of TSC Strongly Associated With Epilepsy

Article Type
Changed
Thu, 12/15/2022 - 16:00
Display Headline
Systemic Disease Manifestations of TSC Strongly Associated With Epilepsy
Study examines the relationship between tuberous sclerosis complex and epilepsy

In a study of 1816 patients with tuberous sclerosis complex (TSC), researchers found that specific disease manifestations (cardiac rhabdomyomas, retinal hamartomas, renal cysts, renal angiomyolipomas, and facial angiofibromas) were associated with a higher likelihood of epilepsy development. The authors posit that this research can help identify patients who will benefit from novel, targeted, preventive treatments.

Jeong A, Wong M. Systemic disease manifestations associated with epilepsy in tuberous sclerosis complex [published online ahead of print July 15, 2016]. Epilepsia. 2016;doi:10.1111/epi.13467. 


A Second Look at Head MRIs Demonstrates the Value of Re-Review

Article Type
Changed
Thu, 12/15/2022 - 16:00
Display Headline
A Second Look at Head MRIs Demonstrates the Value of Re-Review
Presurgical conferences found MRI findings that did not warrant resection.

To determine whether patients with epilepsy are appropriate candidates for resective surgery, presurgical conferences are conducted to review magnetic resonance images (MRIs) of the patient's head. Kenney and associates analyzed repeat reviews of MRIs at presurgical epilepsy conferences to assess their impact on the decision-making process. Among the 233 patients whose charts were re-reviewed, 94 (40.3%) underwent resective surgery, and the analysis revealed that 41 patients (17.6%) had previously undiagnosed findings; 18 of these 41 patients had surgery. However, in 4 of the 41 patients (9.8%), the re-reviews found abnormalities that did not warrant surgical resection, including autoimmunity and bilateral pathology.

Kenney DL, Kelly-Williams KM, Krecke KN, et al. Usefulness of repeat review of head magnetic resonance images during presurgical epilepsy conferences. Epilepsy Res. 2016. In press. http://dx.doi.org/10.1016/j.eplepsyres.2016.06.005.


Stimulation-identified Cortical Naming Sites Pose Unexpected Challenges

Article Type
Changed
Thu, 12/15/2022 - 16:00
Display Headline
Stimulation-identified Cortical Naming Sites Pose Unexpected Challenges
Because word production may involve more than one region of the brain, presurgical electrical stimulation mapping may be misleading

Before surgeons perform a resection involving the language-dominant hemisphere of a patient with epilepsy, they may perform electrical stimulation mapping to identify critical language sites. Typically, they ask patients to identify objects to help locate the language cortex and then avoid resecting areas of the brain in which electrical stimulation makes it difficult for patients to name those objects. But because word production involves mechanisms that may be centered in more than one area of the brain, Hamberger et al tested locations identified by stimulation as naming sites to look for disparities. Testing patients with refractory temporal lobe epilepsy who had subdural electrodes implanted, they discovered that stimulating naming sites in the superior temporal region was more likely to disrupt phonological processing but did not affect a patient's ability to process semantic information. Stimulating the inferior temporal naming sites was more likely to impair semantic processing.

Hamberger MJ, Miozzo M, Schevon CA, et al. Functional differences among stimulation-identified cortical naming sites in the temporal region. Epilepsy Behav. 2016;60:124-129. 


Evaluating Alternatives to Open Surgical Resection for Epilepsy

Article Type
Changed
Thu, 12/15/2022 - 16:00
Display Headline
Evaluating Alternatives to Open Surgical Resection for Epilepsy
A recent review suggests clinicians should consider stereotactic electroencephalography, laser interstitial thermal therapy, and stereotactic radiosurgery

Open surgical resection is still considered the best approach for patients whose epilepsy does not respond well to medical therapy. But although open resection is considered the gold standard in neurosurgical care, its shortcomings need to be addressed. McGovern and colleagues do so in a review published in Current Neurology and Neuroscience Reports. They point to the value of stereotactic electroencephalography, which can localize deep epileptic foci. Similarly, laser interstitial thermal therapy (LITT) and stereotactic radiosurgery have advantages because they can ablate specific regions of the brain using minimally invasive or noninvasive techniques; LITT can also offer clinicians near real-time feedback on its effects. Neurostimulation is also worth considering in select patients because it can reduce seizure occurrence without the need for ablation or resection.

McGovern RA, Banks GP, McKhann GM 2nd. New techniques and progress in epilepsy surgery. Curr Neurol Neurosci Rep. 2016;16(7):65. 


Smartphone app aimed at disrupting first-episode psychosis shows promise

Article Type
Changed
Mon, 04/16/2018 - 13:55
Display Headline
Smartphone app aimed at disrupting first-episode psychosis shows promise

BETHESDA, MD. – What would the world look like for people with first-episode psychosis if they could be identified before they ended up in care? And what if once identified, they could begin receiving treatment immediately?

Those questions are not just hypothetical to Danielle Schlosser, PhD.

Using online screening based on proven tools, followed by enrollment in a secured, closed community created through the use of a smartphone app, Dr. Schlosser and her colleagues are delivering remote care to people with first-episode psychosis, rather than making them come to the clinic.

“It’s controversial, but you’re not doing anything meaningful if you don’t stir things up,” Dr. Schlosser said in an interview. “Why do we have to have a one-size-fits-all model?”

Statistics from the NIMH show that the duration of psychosis before treatment is 1-3 years. People in the early stages of psychosis, typically in their late teens or early 20s, often do not find their way to care until after admission through an emergency department or a brush with the law, Dr. Schlosser said. Once diagnosed, clinically meaningful improvements in outcomes in this cohort often are impeded by cognitive and motivational impairment, fear of stigma, or logistical challenges such as finding reliable transportation to a treatment site. According to the World Health Organization, half of all people with schizophrenia globally are not receiving treatment.

“We can do better,” Dr. Schlosser said.

In the PRIME design trials, Dr. Schlosser and her colleagues are evaluating how “user-centered design” might improve treatment delivery. In stage 1 of the study, two design phases, each with a separate group of 10 participants with recent-onset schizophrenia, helped a global design firm called IDEO arrive at a product that Dr. Schlosser and her colleagues described in a study published online as “casual, friendly, and nonstigmatizing, which is in line with the recovery model of psychosis” (JMIR Res Protoc. 2016;5[2]:e77).

The app included short motivational texts from trained therapists, a feature for individualized goal setting in prognostically important psychosocial domains, opportunities for social networking via direct peer-to-peer messaging, and a community “moments feed” aimed at capturing and reinforcing rewarding experiences and achieving goals.

After 12 weeks of using the app, Dr. Schlosser and her coinvestigators found that trial participants, all of whom had been asked to use it at least once a week, had used it an average of once every other day and had actively engaged with its various features at every log-in. Retention and satisfaction were 100% in each group, and levels of engagement from stage 1 to stage 2 increased by what Dr. Schlosser and her coauthors reported as “two- or threefold” in almost every aspect of the platform.

Dr. Schlosser said such impressive results have continued now that the study has entered stage 2, which is being conducted across the United States, Canada, and Mexico. Fifty people have enrolled, Dr. Schlosser said.

“So far, we have a 93% retention in our clinical trial, which is very high for this population,” she said. “We’re also seeing that two-thirds of the population are self-referred. You just don’t see that in early psychosis research.”

She credits those results in part to the study’s online recruitment design, but Dr. Schlosser said the patient-reported outcomes are the most gratifying.

“We got a letter from a mom who said she wished we could have seen her daughter’s face when she saw that everyone in her [online] group looks ‘normal.’ Her daughter was looking for hope, and she found it mirrored back at her,” Dr. Schlosser said. “They are not their illness.”

A prime reason people with recent-onset schizophrenia don’t access formal treatment is the fear they will be stigmatized, Dr. Schlosser said. Furthermore, she said, since most of those affected are young, creative, and “antiestablishment” in their attitudes, they already deal with stigma. The most effective treatment experience needed to be based on those kinds of values, she said.

“They want more control in their lives, and they want to connect with others like them. The idea of seeking care in a traditional setting is a deterrent for these people,” Dr. Schlosser said in the interview. “So, rather than build a new building, we built them a digital platform.”

A few audience members expressed concern that such a treatment model may expose these patients to unnecessary harm. Dr. Schlosser replied in the interview that even in clinical settings, patients prescribed medications still may be noncompliant. “We are not the enforcers of medications. It’s still the patient’s choice. Maybe I am being too provocative, but if people don’t want to take medications, then we try to work with them where they are,” she said.

In addition to the support from the online community, trial participants spend an average of 20 minutes with the coaching staff, making the model a “very low resource” one, according to Dr. Schlosser.

PRIME is expected to be completed in spring 2018, at which time the data collected potentially will be used to refine user-designed apps for use in other mental disorders, Dr. Schlosser said.

“I want to promote a digital system of care so that we can move immediately from when we identify a person is at risk to immediately giving them support. I think we can improve things so that we don’t just have equivalent levels of care but optimal ones,” Dr. Schlosser told the audience. “What we have had as our system of care to date isn’t working.”

Dr. Schlosser did not have any relevant disclosures.

[email protected]

On Twitter @whitneymcknight

Intervening early will set stage for treatments

One of the most compelling aspects of schizophrenia, the most serious of mental disorders, is its age of onset in late adolescence and early adulthood. Intervening early has good logic and good sense on its side. Nothing can be more important than giving attention to patients and families early on. However, it remains unknown what biological processes trigger the disorder, and to date there are no data suggesting that we are able to slow the progression of the illness. Nevertheless, focusing early will teach us about the illness and set the stage for when the next generation of biological treatments emerges.

David Pickar, MD, is a psychiatrist and former (retired) director of intramural research at the National Institute of Mental Health. In addition, Dr. Pickar is adjunct professor of psychiatry at Johns Hopkins University, Baltimore, and at the Uniformed Services University of the Health Sciences, Bethesda, Md.

Article Source

EXPERT ANALYSIS FROM THE 2016 NIMH CONFERENCE ON MENTAL HEALTH SERVICES RESEARCH


Sunscreens safe in babies, children

Article Type
Changed
Fri, 01/18/2019 - 16:07
Display Headline
Sunscreens safe in babies, children

BOSTON – Despite what some popular online media outlets report, sunscreens are safe in children and can even be used on infants under 6 months of age when sun avoidance – the best approach to protecting babies from the damaging effects of the sun – is not possible, according to Mercedes E. Gonzalez, MD.

A quick Google search reveals numerous widely shared articles about the dangers lurking in one's beach bag. And while many product labels recommend asking a doctor whether the product is safe for babies under age 6 months, that is only because most product safety studies did not include that age group, Dr. Gonzalez of the University of Miami said at the American Academy of Dermatology summer meeting.

In fact, there is "nothing magical that happens" in infant skin after 6 months that makes sunscreen use safer, she said. Infant skin is structurally and functionally different from adult skin, and although it matures gradually over time, which reduces susceptibility to percutaneous absorption of topically applied products, the absorption risk is minimal even in babies younger than age 6 months.

The skin characteristics that make younger skin more susceptible to percutaneous absorption also make babies and children unusually susceptible to ultraviolet radiation and ultraviolet radiation–induced immunosuppression, for which the consequences are not fully understood, she said.

Among the more commonly cited sunscreen ingredients of concern are oxybenzone (benzophenone-3) and nanoparticles, she noted.

However, the overall consensus based on studies of oxybenzone is that, aside from causing some cases of allergic and irritant contact dermatitis, the compound is safe; no cause-and-effect relationship between oxybenzone and systemic side effects in humans has been reported, and periodic reviews by European, Australian, and U.S. safety panels all conclude that it is safe.

Numerous studies of nanoparticles, such as nanosized zinc oxide and titanium dioxide, have shown that absorption is confined to the level of the stratum corneum, even when skin barrier function has been altered, she said, noting that most are coated with aluminum oxide and SiO2 to minimize contact.

However, the safety of sunscreen shouldn’t be seen as license to ignore sun-exposure recommendations; sunscreen in infants should be considered “the last layer of protection,” used only on exposed areas when adequate clothing and shade are not available, according to a 2011 American Academy of Pediatrics statement (Pediatrics. 2011 Feb. doi: 10.1542/peds.2010-3501).

Efforts should be made to keep babies in the shade when outdoors whenever possible, especially during peak sun hours. Use sun-protective clothing, including hats, sunglasses, and long-sleeved shirts, Dr. Gonzalez advised.

When sunscreen is required, a broad-spectrum, water-resistant product with an SPF of more than 30 is preferable.

“But the best sunscreen is the one you and your child will use,” she said.

Mineral-based products are less irritating and thus may be a preferred option for children with atopic dermatitis, she added.

Advise parents to apply sunscreen to all areas not protected by their child’s clothing, paying particular attention to vulnerable areas, including the back of the neck, ears, and dorsal feet. Reapply before going outdoors, and then again every 2 hours, she advised.

“So the overall answer to the parents’ question, ‘Are sunscreens safe?’ ... the overwhelming answer here is yes, and the weight of the evidence shows there is no proven harm from sunscreen use especially when used properly,” she said.

Provide specific guidance for pediatric sunscreen use

In the face of conflicting information about sunscreen safety and efficacy, parents with questions about sunscreen are looking for specific direction, Dr. Gonzalez said.

She said she finds it helpful to teach them the importance of reading labels: checking the ingredients and looking for an SPF above 30, broad-spectrum coverage, and water resistance. She also recommends providing a list or images of good options and circling the specific preferred products.

For babies, she finds stick sunscreens most useful for application.

“I generally don’t recommend sprays, but if they’re going to use a spray – and parents love sprays because they are easy to apply – I recommend the ones that have some zinc oxide in them, so that when they apply them they can see where they’re going on the skin,” Dr. Gonzalez said.

Tell patients to apply sunscreen before leaving the house, she advised, adding that making sunscreen application part of a daily routine helps encourage healthy behaviors, as does allowing children, at the right age, to participate in sunscreen application.

For adolescents, avoid scare tactics such as warnings about skin cancer, she suggested. Instead, focus on the benefits of avoiding the sun, help them find a product they like by asking why they don’t like a particular product and recommending an alternative, and then follow up when they come back in.

“I really try to address it at every visit,” Dr. Gonzalez said.

“Finally, the most important message is that sunscreen is really just one part of complete sun protection,” she said, noting that specific information about where to buy sun-protective clothing and hats is also important.

Dr. Gonzalez reported serving as a speaker and/or advisory board member and receiving honoraria from Pierre Fabre Dermatologie, Anacor Pharmaceuticals, Encore Dermatology, and PuraCap Pharmaceutical.


Article Source: EXPERT ANALYSIS FROM THE AAD SUMMER ACADEMY 2016
