Musculoskeletal ultrasound training now offered in nearly all U.S. rheumatology fellowships
Musculoskeletal ultrasound (MSUS) fellowship opportunities are growing among rheumatology programs across the country as professionals push for more standardized education, according to a survey of fellowship program directors.
The rise in use of MSUS among rheumatologists is spurring more comprehensive education to help providers acquire a skill set that researchers expect will only become more prevalent.
The investigators sent two surveys to 113 rheumatology fellowship program directors. In the first survey, responses from the directors of 108 programs indicated that 101 (94%) offered MSUS programs (Arthritis Care Res. 2017 Aug 4. doi: 10.1002/acr.23336).
While this number has increased dramatically since a 2013 survey showed that 60% offered MSUS programs, the new survey found that 66% of respondents would prefer that MSUS training be optional rather than a formal part of the fellowship program.
This preference for informal education programs was mirrored in the second survey, which specifically targeted the 101 programs known to provide some sort of MSUS education.
Among the 74 program directors who responded, 30 (41%) reported having a formal curriculum, while 44 (59%) did not, with a lack of fellows interested in learning the material cited as a major barrier (P = .012).
Another major barrier, according to Dr. Torralba and her colleagues, is access to faculty with enough teaching experience to properly teach MSUS skills, with 62 (84%) reporting having no or only one faculty member with MSUS certification (P = .049).
Programs without proper faculty available and even those with available faculty are choosing to outsource lessons to expensive teaching programs such as the Ultrasound School of North American Rheumatologists (USSONAR) fellowship course, according to Dr. Torralba and her associates.
“While cost of external courses can be prohibitive (expenses for a 2- to 4-day course range between $1,500 and $4,000), programs may augment MSUS teaching using these courses for several reasons,” according to Dr. Torralba and her colleagues. These include an “insufficient number of teaching faculty, limited time or support for faculty to deliver all educational content, inadequate confidence or competency for faculty to teach content, and utilization of external materials to bolster resources.”
While these barriers still need addressing, according to Dr. Torralba and her colleagues, half of respondents noted that previous barriers such as political pushback and lack of fellow interest are starting to recede, giving programs more room to develop the MSUS training that the researchers assert will be necessary for future rheumatologists.
“A standardized MSUS curriculum developed and endorsed by program directors and MSUS lead educators is now reasonably within sights,” the investigators wrote. “We need to work together to proactively champion MSUS education for both faculty and fellows who desire to attain this skill set.”
This study was limited by its reliance on self-reported survey data and by its small sample size. The researchers were also forced to rely on program directors’ perceptions of how effective their MSUS programs were, rather than asking program participants directly.
The researchers reported no relevant financial disclosures.
[email protected]
On Twitter @eaztweets
FROM ARTHRITIS CARE & RESEARCH
Key clinical point: Nearly all U.S. rheumatology fellowship programs now offer musculoskeletal ultrasound training.
Major finding: Of 108 program directors who responded to a survey, 101 (94%) offered a musculoskeletal ultrasound fellowship.
Data source: Survey of 113 rheumatology fellowship program directors gathered from the Fellowship and Residency Electronic Interactive Database Access (FREIDA) online database.
Disclosures: The investigators reported no relevant financial disclosures.
The Authors Reply, “What Can Be Done to Maintain Positive Patient Experience and Improve Residents’ Satisfaction?” and “Standardized Attending Rounds to Improve the Patient Experience: A Pragmatic Cluster Randomized Controlled Trial”
We thank Talari et al. for their comments in response to our randomized controlled trial evaluating the impact of standardized rounds on patient, attending, and trainee satisfaction. We agree that many factors beyond rounding structure contribute to resident satisfaction, including those highlighted by the authors, and would enthusiastically welcome additional research in this realm.
Because our study intervention addressed rounding structure, we elected to specifically focus on satisfaction with rounds, both from the physician and patient perspectives. We chose to ask about patient satisfaction with attending rounds, as opposed to more generic measures of patient satisfaction, to allow for more direct comparison between attending/resident responses and patient responses. Certainly, there are many other factors that affect overall patient experience. Surveys such as Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) and Press Ganey do not specifically address rounds, are often completed several weeks following hospitalization, and may have low response rates. Relying on such global assessments of patient experience may also reduce the power of the study. Although patient responses to our survey may be higher than scores seen with HCAHPS and Press Ganey, the randomized nature of our study helps control for other differences in the hospitalization experience unrelated to rounding structure. Similarly, because physician teams were randomly assigned, differences in census were not a major factor in the study. Physician blinding was not possible due to the nature of the intervention, which may have affected the satisfaction reports from attendings and residents. For our primary outcome (patient satisfaction with rounds), patients were blinded to the nature of our intervention, and all study team members involved in data collection and statistical analyses were blinded to study arm allocation.
In summary, we feel that evaluating the trade-offs and consequences of interventions should be examined from multiple perspectives, and we welcome additional investigations in this area.
© 2017 Society of Hospital Medicine
What Can Be Done to Maintain Positive Patient Experience and Improve Residents’ Satisfaction? In Reference to: “Standardized Attending Rounds to Improve the Patient Experience: A Pragmatic Cluster Randomized Controlled Trial”
We read the article by Monash et al.1 published in the March 2017 issue with great interest. This randomized study showed a discrepancy between patients’ and residents’ satisfaction with standardized rounds; for example, residents reported less autonomy, efficiency, and teaching, as well as longer rounds.
We agree that letting residents lead the rounds with minimal participation of an attending (only when needed) may improve resident satisfaction. Other factors, such as quality of teaching, positive comments to learners during bedside rounds (whenever appropriate), and a positive attending attitude, might be helpful.2,3 We believe that adopting such a model with residents’ benefit in mind will lead to better satisfaction among trainees.
On the other hand, we note that the nature of the study might have exaggerated patient satisfaction when compared with real-world surveys.4 The survey appears to focus only on attending rounds and did not consider other factors like hospitality, pain control, etc. A low patient census and lack of double blinding are other potential factors.
In conclusion, we want to congratulate the authors for raising this important topic and showing positive patients’ satisfaction with standardized rounds on teaching services. Further research should focus on improving residents’ satisfaction without compromising patients’ experiences.
1. Monash B, Najafi N, Mourad M, et al. Standardized Attending Rounds to Improve the Patient Experience: A Pragmatic Cluster Randomized Controlled Trial. J Hosp Med. 2017;12(3):143-149. PubMed
2. Williams KN, Ramani S, Fraser B, Orlander JD. Improving bedside teaching: findings from a focus group study of learners. Acad Med. 2008;83(3):257-264. PubMed
3. Castiglioni A, Shewchuk RM, Willett LL, Heudebert GR, Centor RM. A pilot study using nominal group technique to assess residents’ perceptions of successful attending rounds. J Gen Intern Med. 2008;23(7):1060-1065. PubMed
4. Siddiqui ZK, Wu AW, Kurbanova N, Qayyum R. Comparison of Hospital Consumer Assessment of Healthcare Providers and Systems patient satisfaction scores for specialty hospitals and general medical hospitals: confounding effect of survey response rate. J Hosp Med. 2014;9(9):590-593. PubMed
The Authors Reply: “Cost and Utility of Thrombophilia Testing”
We thank Dr. Berse and colleagues for their correspondence about our paper.1,2 We are pleased they agreed with our conclusion: Thrombophilia testing has limited clinical utility in most inpatient settings.
Berse and colleagues critiqued details of our methodology in calculating payer cost, including how we estimated the number of Medicare claims for thrombophilia testing. We estimated that there were at least 280,000 Medicare claims in 2014 using CodeMap® (Wheaton Partners, LLC, Schaumburg, IL), a dataset of utilization data from the Physician Supplier Procedure Summary Master File from all Medicare Part B carriers.3 This estimate was similar to that reported in a previous publication.4
Thus, regardless of the precise estimates, even a conservative estimate of $33 to $80 million of unnecessary spending is far too much. It is a perfect example of “Things We Do for No Reason.”
Disclosure
Nothing to report.
1. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804. PubMed
2. Berse B, Lynch JA, Bowen S, Grosse SD. In Reference to: “Cost and Utility of Thrombophilia Testing.” J Hosp Med. 2017;12(9):783. 
3. CodeMap® https://www.codemap.com/. Accessed March 2, 2017.
4. Somma J, Sussman II, Rand JH. An evaluation of thrombophilia screening in an urban tertiary care medical center: A “real world” experience. Am J Clin Pathol. 2006;126(1):120-127. DOI:10.1309/KV06-32LJ-8EDM-EWQT. PubMed
© 2017 Society of Hospital Medicine
In Reference to: “Cost and Utility of Thrombophilia Testing”
The article by Petrilli et al. points to the important but complicated issue of ordering laboratory testing for thrombophilia despite multiple guidelines that dispute the clinical utility of such testing for many indications.1 We question the basis of these authors’ assertion that Medicare spends $300 to $672 million for thrombophilia testing annually. They arrived at this figure by multiplying the price of a thrombophilia test panel (between $1100 and $2400) by the number of annual Medicare claims for thrombophilia analysis, which they estimated at 280,000. The price of the panel is derived from two papers: (1) a 2001 review2 that lists prices of various thrombophilia-related tests adding up to $1782, and (2) a 2006 evaluation by Somma et al.3 of thrombophilia screening at one hospital in New York in 2005. The latter paper refers to various thrombophilia panels from Quest Diagnostics with list prices ranging from $1311 to $2429. However, the repertoire of available test panels and their prices have changed over the last decade. The cost evaluation of thrombophilia testing should be based on actual current payments for tests, and not on list prices for laboratory offerings from over a decade ago. Several laboratories offer mutational analysis of 3 genes—F5, F2, and MTHFR—as a thrombophilia risk panel. Based on the Current Procedural Terminology (CPT) codes listed by the test suppliers (81240, 81241, and 81291), the average Medicare payment for the combination of these 3 markers in 2013 was $172.4 A broader panel of several biochemical, immunological, and genetic assays had a maximum Medicare payment in 2015 of $405 (Table).5
In conclusion, the cost evaluation of thrombophilia screening is more challenging than the calculation by Petrilli et al. suggests.1 Even if Medicare paid as much as $400 per individual tested and assuming up to 200,000 individuals underwent thrombophilia testing per year, the aggregate Medicare expenditure would have been no more than roughly $80 million. Thus, the estimated range in the article appears to have overstated actual Medicare expenditures by an order of magnitude. This does not take away from their overall conclusion that payers are burdened with significant expenditures for laboratory testing that may not present clinical value for many patients.6 We need research into the patterns of utilization as well as improvements in documentation of expenditures associated with these tests.
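The gap between the two estimates comes down to simple arithmetic. A quick sanity check, using only figures quoted in the letter itself (panel list prices, claim counts, and the revised per-test payment), reproduces both ranges:

```python
# Sanity-check the two Medicare cost estimates discussed in the letter.

# Petrilli et al.'s original estimate: list price of a thrombophilia
# panel ($1,100-$2,400) multiplied by ~280,000 annual Medicare claims.
low = 1_100 * 280_000    # $308 million
high = 2_400 * 280_000   # $672 million

# Berse et al.'s upper bound: at most ~$400 actually paid per individual
# tested, with up to ~200,000 individuals tested per year.
revised = 400 * 200_000  # $80 million

print(f"Original estimate: ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
print(f"Revised upper bound: ${revised / 1e6:.0f}M")
```

This makes the letter's point concrete: swapping decade-old list prices for actual Medicare payments shrinks the estimate from hundreds of millions of dollars to roughly $80 million at most.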
Disclosure
The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention, the Department of Veterans Affairs, or the United States government.
1. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804. PubMed
2. Abramson N, Abramson S. Hypercoagulability: clinical assessment and treatment. South Med J. 2001;94(10):1013-1020. PubMed
3. Somma J, Sussman II, Rand JH. An evaluation of thrombophilia screening in an urban tertiary care medical center: A “real world” experience. Am J Clin Pathol. 2006;126(1):120-127. PubMed
4. Lynch JA, Berse B, Dotson WD, Khoury MJ, Coomer N, Kautter J. Utilization of genetic tests: Analysis of gene-specific billing in Medicare claims data [Published online ahead of print January 26, 2017]. Genet Med. 2017. doi: 10.1038/gim.2016.209. PubMed
5. Centers for Medicare and Medicaid Services. Clinical Laboratory Fee Schedule 2016. https://www.cms.gov/Medicare/Medicare-fee-for-service-Payment/clinicallabfeesched/index.html. Accessed on December 20, 2016.
6. Stevens SM, Woller SC, Bauer KA, et al. Guidance for the evaluation and treatment of hereditary and acquired thrombophilia. J Thromb Thrombolysis. 2016;41(1):154-164. PubMed
© 2017 Society of Hospital Medicine
Reducing Routine Labs—Teaching Residents Restraint
Inappropriate resource utilization is a pervasive problem in healthcare, and it has received increasing emphasis over the last few years as financial strain on the healthcare system has grown. This waste has led to new models of care—bundled care payments, accountable care organizations, and merit-based payment systems. Professional organizations have also emphasized the provision of high-value care and avoiding unnecessary diagnostic testing and treatment. In April 2012, the American Board of Internal Medicine (ABIM) launched the Choosing Wisely initiative to assist professional societies in putting forth recommendations on clinical circumstances in which particular tests and procedures should be avoided.
Until recently, teaching cost-effective care was not widely considered an important part of internal medicine residency programs. In a 2010 study surveying residents about resource utilization feedback, only 37% of internal medicine residents reported receiving any feedback on resource utilization and 20% reported receiving regular feedback.1 These findings are especially significant in the broader context of national healthcare spending, as there is evidence that physicians who train in high-spending localities tend to have high-spending patterns later in their careers.2 Another study showed similar findings when looking at region of training relative to success at recognizing high-value care on ABIM test questions.3 The Accreditation Council for Graduate Medical Education has developed the Clinical Learning Environment Review program to help address this need. This program provides feedback to teaching hospitals about their success at teaching residents and fellows to provide high-value medical care.
Given the current zeitgeist of emphasizing cost-effective, high-value care, appropriate utilization of routine labs is one area that stands out as especially low-hanging fruit. The Society of Hospital Medicine, as part of the Choosing Wisely campaign, recommended minimizing routine lab draws in hospitalized patients with clinical and laboratory stability.4 Certainly, avoiding unnecessary routine lab draws is ideal because it saves patients the pain of superfluous phlebotomy, allows phlebotomy resources to be directed to blood draws with actual clinical utility, and saves money. There is also good evidence that hospital-acquired anemia, an effect of overuse of routine blood draws, has an adverse impact on morbidity and mortality in postmyocardial infarction patients5,6 and more generally in hospitalized patients.7
Several studies have examined lab utilization on teaching services. Not surprisingly, the vast majority of test utilization is attributable to the interns (45%) and residents (26%), rather than attendings.8 Another study showed that internal medicine residents at one center had a much stronger self-reported predilection for ordering daily recurring routine labs rather than one-time labs for the following morning when admitting patients and when picking up patients, as compared with hospitalist attendings.9 This self-reported tendency translated into ordering more complete blood counts and basic chemistry panels per patient per day. A qualitative study looking at why internal medicine and general surgery residents ordered unnecessary labs yielded a number of responses, including ingrained habit, lack of price transparency, clinical uncertainty, belief that the attending expected it, and absence of a culture emphasizing resource utilization.10
In this issue of the Journal of Hospital Medicine, Kurtzman and colleagues report on a mixed-methods study looking at internal medicine resident engagement at their center with an electronic medical record–associated dashboard providing feedback on lab utilization.11 Over a 6-month period, the residents randomized into the dashboard group received weekly e-mails while on service with a brief synopsis of their lab utilization relative to their peers and also a link to a dashboard with a time-series display of their relative lab ordering. While the majority of residents (74%) opened the e-mail, only a minority (21%) actually accessed the dashboard. There was no statistically significant relationship between dashboard use and lab ordering, though there was a trend toward decreased lab ordering associated with opening the dashboard. The residents who participated in a focus group expressed both positive and negative opinions on the dashboard.
This is one example of social comparison feedback, which aims to improve performance by providing information to physicians on their performance relative to their peers. It has been shown to be effective in other areas of clinical medicine like limiting antibiotic overutilization in patients with upper respiratory infections.12 One study examining social comparison feedback and objective feedback found that social comparison feedback improved performance for a simulated work task more for high performers but less for low performers than standard objective feedback.13 The utility of this type of feedback has not been extensively studied in healthcare.
However, the audit and feedback strategy, of which social comparison feedback is a subtype, has been extensively studied in healthcare. A 2012 Cochrane Review found that audit and feedback leads to “small but potentially important improvements in professional practice.”14 They found a wide variation in the effect of feedback among the 140 studies they analyzed. The factors strongly associated with a significant improvement after feedback were as follows: poor performance at baseline, a colleague or supervisor as the one providing the audit and feedback, repetitive feedback, feedback given both verbally and in writing, and clear advice or guidance on how to improve. Many of these components were missing from this study—that may be one reason the authors did not find a significant relationship between dashboard use and lab ordering.
A number of interventions, however, have been shown to decrease lab utilization, including unbundling of the components of the metabolic panel and disallowing daily recurring lab orders,15 fee displays,16 cost reminders,17 didactics and data feedback,18 and a multifaceted approach (didactics, monthly feedback, checklist, and financial incentives).19 A multipronged strategy that includes education, audit and feedback, hard-stop limits on redundant lab ordering, and fee information is likely to be the most successful approach to reducing lab overutilization for both residents and attending physicians. Resource overutilization is a multifactorial problem, and multifactorial problems call for multifaceted solutions. Moreover, it may be necessary to employ both “carrot” and “stick” elements in such an approach, rewarding physicians who practice appropriate stewardship but also penalizing practitioners who do not appropriately adjust their lab ordering tendencies after receiving feedback showing overuse.
Physician behavior is difficult to change, and there are many reasons why physicians order inappropriate tests and studies, including provider uncertainty, fear of malpractice litigation, and inadequate time to consider the utility of a test. Audit and feedback should be integrated into residency curriculums focusing on high-value care, in which hospitalists should play a central role. If supervising attendings are not integrated into such curriculums and continue to both overorder tests themselves and allow residents to do so, then the informal curriculum will trump the formal one.
Physicians respond to incentives, and appropriately designed incentives should be developed to help steer them to order only those tests and studies that are medically indicated. Such incentives must be provided alongside audit and feedback with appropriate goals that account for patient complexity. Ultimately, routine lab ordering is just one area of overutilization in hospital medicine, and the techniques that are successful at reducing overuse in this arena will need to be applied to other aspects of medicine like imaging and medication prescribing.
Disclosure
The authors declare no conflicts of interest.
1. Dine CJ, Miller J, Fuld A, Bellini LM, Iwashyna TJ. Educating Physicians-in-Training About Resource Utilization and Their Own Outcomes of Care in the Inpatient Setting. J Grad Med Educ. 2010;2(2):175-180. PubMed
2. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393. PubMed
3. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648. PubMed
4. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: Five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492. PubMed
5. Salisbury AC, Amin AP, Reid KJ, et al. Hospital-acquired anemia and in-hospital mortality in patients with acute myocardial infarction. Am Heart J. 2011;162(2):300-309.e3. PubMed
6. Meroño O, Cladellas M, Recasens L, et al. In-hospital acquired anemia in acute coronary syndrome. Predictors, in-hospital prognosis and one-year mortality. Rev Esp Cardiol (Engl Ed). 2012;65(8):742-748. PubMed
7. Koch CG, Li L, Sun Z, et al. Hospital-acquired anemia: Prevalence, outcomes, and healthcare implications. J Hosp Med. 2013;8(9):506-512. PubMed
8. Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: a report from one university’s hospitalist service. Acad Med. 2011;86(1):139-145. PubMed
9. Ellenbogen MI, Ma M, Christensen NP, Lee J, O’Leary KJ. Differences in Routine Laboratory Ordering Between a Teaching Service and a Hospitalist Service at a Single Academic Medical Center. South Med J. 2017;110(1):25-30. PubMed
10. Sedrak MS, Patel MS, Ziemba JB, et al. Residents’ self-report on why they order perceived unnecessary inpatient laboratory tests. J Hosp Med. 2016;11(12):869-872. PubMed
11. Kurtzman G, Dine J, Epstein A, et al. Internal Medicine Resident Engagement with a Laboratory Utilization Dashboard: Mixed Methods Study. J Hosp Med. 2017;12(9):743-746. PubMed
12. Meeker D, Linder JA, Fox CR, et al. Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices: A Randomized Clinical Trial. JAMA. 2016;315(6):562-570. PubMed
13. Moon K, Lee K, Lee K, Oah S. The Effects of Social Comparison and Objective Feedback on Work Performance Across Different Performance Levels. J Organ Behav Manage. 2017;37(1):63-74. 
14. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes (Review). Cochrane Database Syst Rev. 2012;(6):CD000259. PubMed
15. Neilson EG, Johnson KB, Rosenbloom ST, Dupont WD, Talbert D, Giuse DA. The Impact of Peer Management on Test-Ordering Behavior. Ann Intern Med. 2004;141:196-204. PubMed
16. Feldman LS, Shihab HM, Thiemann D, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med. 2013;173(10):903-908. PubMed
17. Stuebing EA, Miner TJ. Surgical vampires and rising health care expenditure: reducing the cost of daily phlebotomy. Arch Surg. 2011;146:524-527. PubMed
18. Iams W, Heck J, Kapp M, et al. A Multidisciplinary Housestaff-Led Initiative to Safely Reduce Daily Laboratory Testing. Acad Med. 2016;91(6):813-820. PubMed
19. Yarbrough PM, Kukhareva PV, Horton D, Edholm K, Kawamoto K. Multifaceted intervention including education, rounding checklist implementation, cost feedback, and financial incentives reduces inpatient laboratory costs. J Hosp Med. 2016;11(5):348-354. PubMed
Inappropriate resource utilization is a pervasive problem in healthcare, and it has received increasing emphasis over the last few years as financial strain on the healthcare system has grown. This waste has led to new models of care—bundled care payments, accountable care organizations, and merit-based payment systems. Professional organizations have also emphasized the provision of high-value care and avoiding unnecessary diagnostic testing and treatment. In April 2012, the American Board of Internal Medicine (ABIM) launched the Choosing Wisely initiative to assist professional societies in putting forth recommendations on clinical circumstances in which particular tests and procedures should be avoided.
Until recently, teaching cost-effective care was not widely considered an important part of internal medicine residency programs. In a 2010 study surveying residents about resource utilization feedback, only 37% of internal medicine residents reported receiving any feedback on resource utilization and 20% reported receiving regular feedback.1 These findings are especially significant in the broader context of national healthcare spending, as there is evidence that physicians who train in high-spending localities tend to have high-spending patterns later in their careers.2 Another study showed similar findings when looking at region of training relative to success at recognizing high-value care on ABIM test questions.3 The Accreditation Council for Graduate Medical Education has developed the Clinical Learning Environment Review program to help address this need. This program provides feedback to teaching hospitals about their success at teaching residents and fellows to provide high-value medical care.
Given the current zeitgeist of emphasizing cost-effective, high-value care, appropriate utilization of routine labs is one area that stands out as an especially low-hanging fruit. The Society of Hospital Medicine, as part of the Choosing Wisely campaign, recommended minimizing routine lab draws in hospitalized patients with clinical and laboratory stability.4 Certainly, avoiding unnecessary routine lab draws is ideal because it saves patients the pain of superfluous phlebotomy, allows phlebotomy resources to be directed to blood draws with actual clinical utility, and saves money. There is also good evidence that hospital-acquired anemia, an effect of overuse of routine blood draws, has an adverse impact on morbidity and mortality in postmyocardial infarction patients5,6 and more generally in hospitalized patients.7
Several studies have examined lab utilization on teaching services. Not surprisingly, the vast majority of test utilization is attributable to the interns (45%) and residents (26%), rather than attendings.8 Another study showed that internal medicine residents at one center had a much stronger self-reported predilection for ordering daily recurring routine labs rather than one-time labs for the following morning when admitting patients and when picking up patients, as compared with hospitalist attendings.9 This self-reported tendency translated into ordering more complete blood counts and basic chemistry panels per patient per day. A qualitative study looking at why internal medicine and general surgery residents ordered unnecessary labs yielded a number of responses, including ingrained habit, lack of price transparency, clinical uncertainty, belief that the attending expected it, and absence of a culture emphasizing resource utilization.10
In this issue of the Journal of Hospital Medicine, Kurtzman and colleagues report on a mixed-methods study looking at internal medicine resident engagement at their center with an electronic medical record–associated dashboard providing feedback on lab utilization.11 Over a 6-month period, the residents randomized into the dashboard group received weekly e-mails while on service with a brief synopsis of their lab utilization relative to their peers and also a link to a dashboard with a time-series display of their relative lab ordering. While the majority of residents (74%) opened the e-mail, only a minority (21%) actually accessed the dashboard. Also, there was not a statistically significant relationship between dashboard use and lab ordering, though there was a trend to decreased lab ordering associated with opening the dashboard. The residents who participated in a focus group expressed both positive and negative opinions on the dashboard.
This is one example of social comparison feedback, which aims to improve performance by providing information to physicians on their performance relative to their peers. It has been shown to be effective in other areas of clinical medicine like limiting antibiotic overutilization in patients with upper respiratory infections.12 One study examining social comparison feedback and objective feedback found that social comparison feedback improved performance for a simulated work task more for high performers but less for low performers than standard objective feedback.13 The utility of this type of feedback has not been extensively studied in healthcare.
However, the audit and feedback strategy, of which social comparison feedback is a subtype, has been extensively studied in healthcare. A 2012 Cochrane Review found that audit and feedback leads to “small but potentially important improvements in professional practice.”14 They found a wide variation in the effect of feedback among the 140 studies they analyzed. The factors strongly associated with a significant improvement after feedback were as follows: poor performance at baseline, a colleague or supervisor as the one providing the audit and feedback, repetitive feedback, feedback given both verbally and in writing, and clear advice or guidance on how to improve. Many of these components were missing from this study—that may be one reason the authors did not find a significant relationship between dashboard use and lab ordering.
A number of interventions, however, have been shown to decrease lab utilization, including unbundling of the components of the metabolic panel and disallowing daily recurring lab orders,15 fee displays,16 cost reminders,17 didactics and data feedback,18 and a multifaceted approach (didactics, monthly feedback, checklist, and financial incentives).19 A multipronged strategy, including an element of education, audit and feedback, hard-stop limits on redundant lab ordering, and fee information is likely to be the most successful strategy to reducing lab overutilization for both residents and attending physicians. Resource overutilization is a multifactorial problem, and multifactorial problems call for multifaceted solutions. Moreover, it may be necessary to employ both “carrot” and “stick” elements to such an approach, rewarding physicians who practice appropriate stewardship, but also penalizing practitioners who do not appropriately adjust their lab ordering tendencies after receiving feedback showing overuse.
Physician behavior is difficult to change, and there are many reasons why physicians order inappropriate tests and studies, including provider uncertainty, fear of malpractice litigation, and inadequate time to consider the utility of a test. Audit and feedback should be integrated into residency curriculums focusing on high-value care, in which hospitalists should play a central role. If supervising attendings are not integrated into such curriculums and continue to both overorder tests themselves and allow residents to do so, then the informal curriculum will trump the formal one.
Physicians respond to incentives, and appropriately designed incentives should be developed to help steer them to order only those tests and studies that are medically indicated. Such incentives must be provided alongside audit and feedback with appropriate goals that account for patient complexity. Ultimately, routine lab ordering is just one area of overutilization in hospital medicine, and the techniques that are successful at reducing overuse in this arena will need to be applied to other aspects of medicine like imaging and medication prescribing.
Disclosure
The authors declare no conflicts of interest.
Inappropriate resource utilization is a pervasive problem in healthcare, and it has received increasing emphasis over the last few years as financial strain on the healthcare system has grown. This waste has led to new models of care—bundled care payments, accountable care organizations, and merit-based payment systems. Professional organizations have also emphasized the provision of high-value care and avoiding unnecessary diagnostic testing and treatment. In April 2012, the American Board of Internal Medicine (ABIM) launched the Choosing Wisely initiative to assist professional societies in putting forth recommendations on clinical circumstances in which particular tests and procedures should be avoided.
Until recently, teaching cost-effective care was not widely considered an important part of internal medicine residency programs. In a 2010 study surveying residents about resource utilization feedback, only 37% of internal medicine residents reported receiving any feedback on resource utilization and 20% reported receiving regular feedback.1 These findings are especially significant in the broader context of national healthcare spending, as there is evidence that physicians who train in high-spending localities tend to have high-spending patterns later in their careers.2 Another study showed similar findings when looking at region of training relative to success at recognizing high-value care on ABIM test questions.3 The Accreditation Council for Graduate Medical Education has developed the Clinical Learning Environment Review program to help address this need. This program provides feedback to teaching hospitals about their success at teaching residents and fellows to provide high-value medical care.
Given the current zeitgeist of emphasizing cost-effective, high-value care, appropriate utilization of routine labs is one area that stands out as an especially low-hanging fruit. The Society of Hospital Medicine, as part of the Choosing Wisely campaign, recommended minimizing routine lab draws in hospitalized patients with clinical and laboratory stability.4 Certainly, avoiding unnecessary routine lab draws is ideal because it saves patients the pain of superfluous phlebotomy, allows phlebotomy resources to be directed to blood draws with actual clinical utility, and saves money. There is also good evidence that hospital-acquired anemia, an effect of overuse of routine blood draws, has an adverse impact on morbidity and mortality in postmyocardial infarction patients5,6 and more generally in hospitalized patients.7
Several studies have examined lab utilization on teaching services. Not surprisingly, the vast majority of test utilization is attributable to the interns (45%) and residents (26%), rather than attendings.8 Another study showed that internal medicine residents at one center had a much stronger self-reported predilection for ordering daily recurring routine labs rather than one-time labs for the following morning when admitting patients and when picking up patients, as compared with hospitalist attendings.9 This self-reported tendency translated into ordering more complete blood counts and basic chemistry panels per patient per day. A qualitative study looking at why internal medicine and general surgery residents ordered unnecessary labs yielded a number of responses, including ingrained habit, lack of price transparency, clinical uncertainty, belief that the attending expected it, and absence of a culture emphasizing resource utilization.10
In this issue of the Journal of Hospital Medicine, Kurtzman and colleagues report on a mixed-methods study looking at internal medicine resident engagement at their center with an electronic medical record–associated dashboard providing feedback on lab utilization.11 Over a 6-month period, the residents randomized into the dashboard group received weekly e-mails while on service with a brief synopsis of their lab utilization relative to their peers and also a link to a dashboard with a time-series display of their relative lab ordering. While the majority of residents (74%) opened the e-mail, only a minority (21%) actually accessed the dashboard. Also, there was no statistically significant relationship between dashboard use and lab ordering, though there was a trend toward decreased lab ordering associated with opening the dashboard. The residents who participated in a focus group expressed both positive and negative opinions about the dashboard.
This is one example of social comparison feedback, which aims to improve performance by providing information to physicians on their performance relative to their peers. It has been shown to be effective in other areas of clinical medicine, such as limiting antibiotic overutilization in patients with upper respiratory infections.12 One study comparing social comparison feedback with objective feedback found that, relative to standard objective feedback, social comparison feedback improved performance on a simulated work task more for high performers but less for low performers.13 The utility of this type of feedback has not been extensively studied in healthcare.
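To make concrete what a social-comparison metric might compute, here is a minimal sketch. All names, rates, and the specific metric are hypothetical; the study does not describe the dashboard's actual computation.

```python
from statistics import mean

def peer_comparison(own_rate, peer_rates):
    """Return (fraction of peers ordering more labs than own_rate, peer mean).

    Rates are labs ordered per patient-day; both outputs are simple inputs
    to a social-comparison message like a weekly feedback e-mail.
    """
    higher = sum(1 for r in peer_rates if r > own_rate)
    return higher / len(peer_rates), mean(peer_rates)

# Illustrative numbers only: this resident orders more labs than 60% of peers,
# so only 40% of peers order more than they do.
frac_higher, peer_avg = peer_comparison(3.1, [2.0, 2.4, 2.8, 3.5, 4.0])
print(frac_higher, round(peer_avg, 2))  # → 0.4 2.94
```

A message built from these two numbers ("you order more labs than 60% of your peers; the peer average is 2.94 per patient-day") is the kind of relative framing that distinguishes social comparison feedback from purely objective feedback.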
However, the audit and feedback strategy, of which social comparison feedback is a subtype, has been extensively studied in healthcare. A 2012 Cochrane Review found that audit and feedback leads to “small but potentially important improvements in professional practice.”14 They found a wide variation in the effect of feedback among the 140 studies they analyzed. The factors strongly associated with a significant improvement after feedback were as follows: poor performance at baseline, a colleague or supervisor as the one providing the audit and feedback, repetitive feedback, feedback given both verbally and in writing, and clear advice or guidance on how to improve. Many of these components were missing from this study—that may be one reason the authors did not find a significant relationship between dashboard use and lab ordering.
A number of interventions, however, have been shown to decrease lab utilization, including unbundling of the components of the metabolic panel and disallowing daily recurring lab orders,15 fee displays,16 cost reminders,17 didactics and data feedback,18 and a multifaceted approach (didactics, monthly feedback, checklist, and financial incentives).19 A multipronged strategy, including an element of education, audit and feedback, hard-stop limits on redundant lab ordering, and fee information, is likely to be the most successful strategy for reducing lab overutilization among both residents and attending physicians. Resource overutilization is a multifactorial problem, and multifactorial problems call for multifaceted solutions. Moreover, it may be necessary to employ both “carrot” and “stick” elements in such an approach, rewarding physicians who practice appropriate stewardship but also penalizing practitioners who do not appropriately adjust their lab ordering tendencies after receiving feedback showing overuse.
Physician behavior is difficult to change, and there are many reasons why physicians order inappropriate tests and studies, including provider uncertainty, fear of malpractice litigation, and inadequate time to consider the utility of a test. Audit and feedback should be integrated into residency curriculums focusing on high-value care, in which hospitalists should play a central role. If supervising attendings are not integrated into such curriculums and continue to both overorder tests themselves and allow residents to do so, then the informal curriculum will trump the formal one.
Physicians respond to incentives, and appropriately designed incentives should be developed to help steer them to order only those tests and studies that are medically indicated. Such incentives must be provided alongside audit and feedback with appropriate goals that account for patient complexity. Ultimately, routine lab ordering is just one area of overutilization in hospital medicine, and the techniques that are successful at reducing overuse in this arena will need to be applied to other aspects of medicine like imaging and medication prescribing.
Disclosure
The authors declare no conflicts of interest.
1. Dine CJ, Miller J, Fuld A, Bellini LM, Iwashyna TJ. Educating Physicians-in-Training About Resource Utilization and Their Own Outcomes of Care in the Inpatient Setting. J Grad Med Educ. 2010;2(2):175-180. PubMed
2. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393. PubMed
3. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648. PubMed
4. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: Five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492. PubMed
5. Salisbury AC, Amin AP, Reid KJ, et al. Hospital-acquired anemia and in-hospital mortality in patients with acute myocardial infarction. Am Heart J. 2011;162(2):300-309.e3. PubMed
6. Meroño O, Cladellas M, Recasens L, et al. In-hospital acquired anemia in acute coronary syndrome. Predictors, in-hospital prognosis and one-year mortality. Rev Esp Cardiol (Engl Ed). 2012;65(8):742-748. PubMed
7. Koch CG, Li L, Sun Z, et al. Hospital-acquired anemia: Prevalence, outcomes, and healthcare implications. J Hosp Med. 2013;8(9):506-512. PubMed
8. Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: a report from one university’s hospitalist service. Acad Med. 2011;86(1):139-145. PubMed
9. Ellenbogen MI, Ma M, Christensen NP, Lee J, O’Leary KJ. Differences in Routine Laboratory Ordering Between a Teaching Service and a Hospitalist Service at a Single Academic Medical Center. South Med J. 2017;110(1):25-30. PubMed
10. Sedrak MS, Patel MS, Ziemba JB, et al. Residents’ self-report on why they order perceived unnecessary inpatient laboratory tests. J Hosp Med. 2016;11(12):869-872. PubMed
11. Kurtzman G, Dine J, Epstein A, et al. Internal Medicine Resident Engagement with a Laboratory Utilization Dashboard: Mixed Methods Study. J Hosp Med. 2017;12(9):743-746. PubMed
12. Meeker D, Linder JA, Fox CR, et al. Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices: A Randomized Clinical Trial. JAMA. 2016;315(6):562-570. PubMed
13. Moon K, Lee K, Lee K, Oah S. The Effects of Social Comparison and Objective Feedback on Work Performance Across Different Performance Levels. J Organ Behav Manage. 2017;37(1):63-74. 
14. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes (Review). Cochrane Database Syst Rev. 2012;(6):CD000259. PubMed
15. Neilson EG, Johnson KB, Rosenbloom ST, Dupont WD, Talbert D, Giuse DA. The Impact of Peer Management on Test-Ordering Behavior. Ann Intern Med. 2004;141:196-204. PubMed
16. Feldman LS, Shihab HM, Thiemann D, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med. 2013;173(10):903-908. PubMed
17. Stuebing EA, Miner TJ. Surgical vampires and rising health care expenditure: reducing the cost of daily phlebotomy. Arch Surg. 2011;146:524-527. PubMed
18. Iams W, Heck J, Kapp M, et al. A Multidisciplinary Housestaff-Led Initiative to Safely Reduce Daily Laboratory Testing. Acad Med. 2016;91(6):813-820. PubMed
19. Yarbrough PM, Kukhareva PV, Horton D, Edholm K, Kawamoto K. Multifaceted intervention including education, rounding checklist implementation, cost feedback, and financial incentives reduces inpatient laboratory costs. J Hosp Med. 2016;11(5):348-354. PubMed
© 2017 Society of Hospital Medicine
Does the Week-End Justify the Means?
Let’s face it—rates of hospital admission are on the rise, but there are still just 7 days in a week. That means that patients are increasingly admitted on weekdays and on the weekend, requiring more nurses and doctors to look after them. Why then are there no lines for coffee on a Saturday? Does this reduced intensity of staffing translate into worse care for our patients?
Since one of its earliest descriptions in hospitalized patients, the “weekend effect” has been extensively studied in various patient populations and hospital settings.1-5 The results have been varied, depending on the place of care,6 reason for care, type of admission,5,7 or admitting diagnosis.1,8,9 Many researchers have posited the drivers behind the weekend effect, including understaffed wards, intensity of specialist care, delays in procedural treatments, or severity of illness, but the truth is that we still don’t know.
Pauls et al. performed a robust systematic review and meta-analysis examining the rates of in-hospital mortality in patients admitted on the weekend compared with those admitted on weekdays.10 They analyzed predetermined subgroups to identify system- and patient-level factors associated with a difference in weekend mortality.
A total of 97 studies—comprising an astounding 51 million patients—were included in the analysis. They found that individuals admitted on the weekend carried an almost 20% increase in the risk of death compared with those who landed in hospital on a weekday. The effect was present both for in-hospital deaths and when looking specifically at 30-day mortality. Translating these findings into practice, an additional 14 deaths per 1000 admissions occur when patients are admitted on the weekend. Brain surgery can be less risky.11
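As a back-of-the-envelope check, the relative and absolute figures above can be reconciled with a few lines of arithmetic. This is a minimal sketch: the 7% baseline weekday mortality is an illustrative assumption chosen so the numbers align, not a value reported by the study.

```python
def excess_deaths_per_1000(baseline_rate, relative_increase):
    """Absolute excess deaths per 1000 admissions, given a baseline mortality
    rate (as a fraction) and a relative increase in risk (as a fraction)."""
    weekend_rate = baseline_rate * (1 + relative_increase)
    return (weekend_rate - baseline_rate) * 1000

# A 20% relative increase on an assumed 7% baseline yields ~14 extra deaths
# per 1000 weekend admissions, consistent with the figure cited above.
print(round(excess_deaths_per_1000(0.07, 0.20)))  # → 14
```

The same calculation makes plain why absolute impact depends heavily on the baseline: the identical 20% relative increase applied to a lower-risk population produces far fewer excess deaths per 1000 admissions.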
Despite this concerning finding, no individual factor was identified that could account for the effect. There was a 16% and 11% increase in mortality in weekend patients associated with decreased hospital staffing and delays to procedural therapies, respectively. No differences were found when examining reduced rates of procedures or illness severity on weekends compared with weekdays. But one must always interpret subgroup analyses, even prespecified ones, with caution because they often lack the statistical power to support firm conclusions.
To this end, an important finding of the study by Pauls et al. highlights the variation in mortality risk as it relates to the weekend effect.10 Even for individuals with cancer, a disease with a relatively predictable rate of decline, there are weekend differences in mortality risk that depend upon the type of cancer.8,12 This heterogeneity persists when examining the possible factors that contribute to the effect, introducing a significant amount of noise into the analysis, and may explain why research to date has been unable to find the proverbial black cat in the coal cellar.
One thing Pauls et al. make clear is that the weekend effect appears to be a real phenomenon, despite significant heterogeneity in the literature.10 Only a high-quality systematic review can draw such conclusions. Prior work demonstrates that this effect is substantial in some individuals, and this study confirms that it persists beyond the immediate period following admission.1,9 The elements contributing to the weekend effect remain undefined and are likely as complex as our healthcare system itself.
Society and policy makers should resist the tantalizing urge to invoke interventions aimed at fixing this issue before fully understanding the drivers of a system problem. The government of the United Kingdom has pledged in its manifesto to create a “7-day National Health Service,” in which weekend services and physician staffing will match those of weekdays. Considering recent labor disputes with junior doctors in the United Kingdom over pay and working hours, the stakes are at an all-time high.
But such drastic measures violate a primary directive of quality improvement science: study and understand the problem before reflexively jumping to solutions. This will require new research endeavors aimed at determining the underlying factor(s) responsible for the weekend effect. Only once we are confident in its cause can careful evaluation of targeted interventions aimed at the highest-risk admissions be instituted. As global hospital and healthcare budgets bend under increasing strain, a critical component of any proposed intervention must be to examine the cost-effectiveness of doing so. Because the weekend effect is one of increased mortality, it will be hard to justify an acceptable price for an individual’s life. And it is not as straightforward as a randomized trial examining the efficacy of parachutes. Any formal evaluation must account for the unintended consequences and opportunity costs of implementing a potential fix aimed at minimizing the weekend effect.
The weekend effect has now been studied for over 15 years. Pauls et al. add to our knowledge of this phenomenon, confirming that the overall risk of mortality for patients admitted on the weekend is real, variable, and substantial.10 As more individuals are admitted to hospitals, resulting in increasing numbers of admissions on the weekend, a desperate search for the underlying cause must be carried out before we can fix it. Whatever the means to the end, our elation will continue to be tempered by a feeling of uneasiness every time our coworkers joyously exclaim, “TGIF!”
Disclosure
The authors have nothing to disclose.
1. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. doi:10.1056/NEJMsa003376. PubMed
2. Bell CM, Redelmeier DA. Waiting for urgent procedures on the weekend among emergently hospitalized patients. AJM. 2004;117(3):175-181. doi:10.1016/j.amjmed.2004.02.047. PubMed
3. Kalaitzakis E, Helgeson J, Strömdahl M, Tóth E. Weekend admission in upper GI bleeding: does it have an impact on outcome? Gastrointest Endosc. 2015;81(5):1295-1296. doi:10.1016/j.gie.2014.12.003. PubMed
4. Nanchal R, Kumar G, Taneja A, et al. Pulmonary embolism: the weekend effect. Chest. 2012;142(3):690-696. doi:10.1378/chest.11-2663. PubMed
5. Ricciardi R, Roberts PL, Read TE, Baxter NN, Marcello PW, Schoetz DJ. Mortality rate after nonelective hospital admission. Arch Surg. 2011;146(5):545-551. PubMed
6. Wunsch H, Mapstone J, Brady T, Hanks R, Rowan K. Hospital mortality associated with day and time of admission to intensive care units. Intensive Care Med. 2004;30(5):895-901. doi:10.1007/s00134-004-2170-3. PubMed
7. Freemantle N, Richardson M, Wood J, et al. Weekend hospitalization and additional risk of death: an analysis of inpatient data. J R Soc Med. 2012;105(2):74-84. doi:10.1258/jrsm.2012.120009. PubMed
8. Lapointe-Shaw L, Bell CM. It’s not you, it’s me: time to narrow the gap in weekend care. BMJ Qual Saf. 2014;23(3):180-182. doi:10.1136/bmjqs-2013-002674. PubMed
9. Concha OP, Gallego B, Hillman K, Delaney GP, Coiera E. Do variations in hospital mortality patterns after weekend admission reflect reduced quality of care or different patient cohorts? A population-based study. BMJ Qual Saf. 2014;23(3):215-222. doi:10.1136/bmjqs-2013-002218. PubMed
10. Pauls LA, Johnson-Paben R, McGready J, Murphy JD, Pronovost PJ, Wu CL. The Weekend Effect in Hospitalized Patients: A Meta-analysis. J Hosp Med. 2017;12(9):760-766. PubMed
11. American College of Surgeons. NSQIP Risk Calculator. http://riskcalculator.facs.org/RiskCalculator/. Accessed on July 5, 2017.
12. Lapointe-Shaw L, Abushomar H, Chen XK, et al. Care and outcomes of patients with cancer admitted to the hospital on weekends and holidays: a retrospective cohort study. J Natl Compr Canc Netw. 2016;14(7):867-874. PubMed
Let’s face it—rates of hospital admission are on the rise, but there are still just 7 days in a week. That means that patients are increasingly admitted on weekdays and on the weekend, requiring more nurses and doctors to look after them. Why then are there no lines for coffee on a Saturday? Does this reduced intensity of staffing translate into worse care for our patients?
Since one of its earliest descriptions in hospitalized patients, the “weekend effect” has been extensively studied in various patient populations and hospital settings.1-5 The results have been varied, depending on the place of care,6 reason for care, type of admission,5,7 or admitting diagnosis.1,8,9 Many researchers have posited the drivers behind the weekend effect, including understaffed wards, intensity of specialist care, delays in procedural treatments, or severity of illness, but the truth is that we still don’t know.
Pauls et al. performed a robust systematic review and meta-analysis examining the rates of in-hospital mortality in patients admitted on the weekend compared with those admitted on weekdays.10 They analyzed predetermined subgroups to identify system- and patient-level factors associated with a difference in weekend mortality.
A total of 97 studies—comprising an astounding 51 million patients—was included in the study. They found that individuals admitted on the weekend carried an almost 20% increase in the risk of death compared with those who landed in hospital on a weekday. The effect was present for both in-hospital deaths and when looking specifically at 30-day mortality. Translating these findings into practice, an additional 14 deaths per 1000 admissions occur when patients are admitted on the weekend. Brain surgery can be less risky.11
Despite this concerning finding, no individual factor was identified that could account for the effect. There was a 16% and 11% increase in mortality in weekend patients associated with decreased hospital staffing and delays to procedural therapies, respectively. No differences were found when examining reduced rates of procedures or illness severity on weekends compared with weekdays. But one must always interpret subgroup analyses, even prespecified ones, with caution because they often lack the statistical power to make concrete conclusions.
To this end, an important finding of the study by Pauls et al. highlights the variation in mortality risk as it relates to the weekend effect.10 Even for individuals with cancer, a disease with a relatively predictable rate of decline, there are weekend differences in mortality risk that depend upon the type of cancer.8,12 This heterogeneity persists when examining for the possible factors that contribute to the effect, introducing a significant amount of noise into the analysis, and may explain why research to date has been unable to find the proverbial black cat in the coal cellar.
One thing Pauls et al. makes clear is that the weekend effect appears to be a real phenomenon, despite significant heterogeneity in the literature.10 Only a high-quality, systematic review has the capability to draw such conclusions. Prior work demonstrates that this effect is substantial in some individuals,and this study confirms that it perseveres beyond an immediate time period following admission.1,9 The elements contributing to the weekend effect remain undefined and are likely as complex as our healthcare system itself.
Society and policy makers should resist the tantalizing urge to invoke interventions aimed at fixing this issue before fully understanding the drivers of a system problem. The government of the United Kingdom has decreed a manifesto to create a “7-day National Health Service,” in which weekend services and physician staffing will match that of the weekdays. Considering recent labor tensions between junior doctors in the United Kingdom over pay and working hours, the stakes are at an all-time high.
But such drastic measures violate a primary directive of quality improvement science to study and understand the problem before reflexively jumping to solutions. This will require new research endeavors aimed at determining the underlying factor(s) responsible for the weekend effect. Once we are confident in its cause, only then can careful evaluation of targeted interventions aimed at the highest-risk admissions be instituted. As global hospital and healthcare budgets bend under increasing strain, a critical component of any proposed intervention must be to examine the cost-effectiveness in doing so. Because the weekend effect is one of increased mortality, it will be hard to justify an acceptable price for an individual’s life. And it is not as straightforward as a randomized trial examining the efficacy of parachutes. Any formal evaluation must account for the unintended consequences and opportunity costs of implementing a potential fix aimed at minimizing the weekend effect.
The weekend effect has now been studied for over 15 years. Pauls et al. add to our knowledge of this phenomenon, confirming that the overall risk of mortality for patients admitted on the weekend is real, variable, and substantial.10 As more individuals are admitted to hospitals, resulting in increasing numbers of admissions on the weekend, a desperate search for the underlying cause must be carried out before we can fix it. Whatever the means to the end, our elation will continue to be tempered by a feeling of uneasiness every time our coworkers joyously exclaim, “TGIF!”
Disclosure
The authors have nothing to disclose.
Let’s face it—rates of hospital admission are on the rise, but there are still just 7 days in a week. That means that patients are increasingly admitted on weekdays and on the weekend, requiring more nurses and doctors to look after them. Why then are there no lines for coffee on a Saturday? Does this reduced intensity of staffing translate into worse care for our patients?
Since one of its earliest descriptions in hospitalized patients, the “weekend effect” has been extensively studied in various patient populations and hospital settings.1-5 The results have been varied, depending on the place of care,6 reason for care, type of admission,5,7 or admitting diagnosis.1,8,9 Many researchers have posited the drivers behind the weekend effect, including understaffed wards, intensity of specialist care, delays in procedural treatments, or severity of illness, but the truth is that we still don’t know.
Pauls et al. performed a robust systematic review and meta-analysis examining the rates of in-hospital mortality in patients admitted on the weekend compared with those admitted on weekdays.10 They analyzed predetermined subgroups to identify system- and patient-level factors associated with a difference in weekend mortality.
A total of 97 studies—comprising an astounding 51 million patients—was included in the study. They found that individuals admitted on the weekend carried an almost 20% increase in the risk of death compared with those who landed in hospital on a weekday. The effect was present for both in-hospital deaths and when looking specifically at 30-day mortality. Translating these findings into practice, an additional 14 deaths per 1000 admissions occur when patients are admitted on the weekend. Brain surgery can be less risky.11
Despite this concerning finding, no individual factor was identified that could account for the effect. There was a 16% and 11% increase in mortality in weekend patients associated with decreased hospital staffing and delays to procedural therapies, respectively. No differences were found when examining reduced rates of procedures or illness severity on weekends compared with weekdays. But one must always interpret subgroup analyses, even prespecified ones, with caution because they often lack the statistical power to make concrete conclusions.
To this end, an important finding of the study by Pauls et al. highlights the variation in mortality risk as it relates to the weekend effect.10 Even for individuals with cancer, a disease with a relatively predictable rate of decline, there are weekend differences in mortality risk that depend upon the type of cancer.8,12 This heterogeneity persists when examining for the possible factors that contribute to the effect, introducing a significant amount of noise into the analysis, and may explain why research to date has been unable to find the proverbial black cat in the coal cellar.
One thing Pauls et al. makes clear is that the weekend effect appears to be a real phenomenon, despite significant heterogeneity in the literature.10 Only a high-quality, systematic review has the capability to draw such conclusions. Prior work demonstrates that this effect is substantial in some individuals,and this study confirms that it perseveres beyond an immediate time period following admission.1,9 The elements contributing to the weekend effect remain undefined and are likely as complex as our healthcare system itself.
Society and policy makers should resist the tantalizing urge to invoke interventions aimed at fixing this issue before fully understanding the drivers of a system problem. The government of the United Kingdom has decreed a manifesto to create a “7-day National Health Service,” in which weekend services and physician staffing will match that of the weekdays. Considering recent labor tensions between junior doctors in the United Kingdom over pay and working hours, the stakes are at an all-time high.
But such drastic measures violate a primary directive of quality improvement science to study and understand the problem before reflexively jumping to solutions. This will require new research endeavors aimed at determining the underlying factor(s) responsible for the weekend effect. Once we are confident in its cause, only then can careful evaluation of targeted interventions aimed at the highest-risk admissions be instituted. As global hospital and healthcare budgets bend under increasing strain, a critical component of any proposed intervention must be to examine the cost-effectiveness in doing so. Because the weekend effect is one of increased mortality, it will be hard to justify an acceptable price for an individual’s life. And it is not as straightforward as a randomized trial examining the efficacy of parachutes. Any formal evaluation must account for the unintended consequences and opportunity costs of implementing a potential fix aimed at minimizing the weekend effect.
The weekend effect has now been studied for over 15 years. Pauls et al. add to our knowledge of this phenomenon, confirming that the overall risk of mortality for patients admitted on the weekend is real, variable, and substantial.10 As more individuals are admitted to hospitals, resulting in increasing numbers of admissions on the weekend, a desperate search for the underlying cause must be carried out before we can fix it. Whatever the means to the end, our elation will continue to be tempered by a feeling of uneasiness every time our coworkers joyously exclaim, “TGIF!”
Disclosure
The authors have nothing to disclose.
1. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. doi:10.1056/NEJMsa003376.
2. Bell CM, Redelmeier DA. Waiting for urgent procedures on the weekend among emergently hospitalized patients. Am J Med. 2004;117(3):175-181. doi:10.1016/j.amjmed.2004.02.047.
3. Kalaitzakis E, Helgeson J, Strömdahl M, Tóth E. Weekend admission in upper GI bleeding: does it have an impact on outcome? Gastrointest Endosc. 2015;81(5):1295-1296. doi:10.1016/j.gie.2014.12.003.
4. Nanchal R, Kumar G, Taneja A, et al. Pulmonary embolism: the weekend effect. Chest. 2012;142(3):690-696. doi:10.1378/chest.11-2663.
5. Ricciardi R, Roberts PL, Read TE, Baxter NN, Marcello PW, Schoetz DJ. Mortality rate after nonelective hospital admission. Arch Surg. 2011;146(5):545-551.
6. Wunsch H, Mapstone J, Brady T, Hanks R, Rowan K. Hospital mortality associated with day and time of admission to intensive care units. Intensive Care Med. 2004;30(5):895-901. doi:10.1007/s00134-004-2170-3.
7. Freemantle N, Richardson M, Wood J, et al. Weekend hospitalization and additional risk of death: an analysis of inpatient data. J R Soc Med. 2012;105(2):74-84. doi:10.1258/jrsm.2012.120009.
8. Lapointe-Shaw L, Bell CM. It's not you, it's me: time to narrow the gap in weekend care. BMJ Qual Saf. 2014;23(3):180-182. doi:10.1136/bmjqs-2013-002674.
9. Concha OP, Gallego B, Hillman K, Delaney GP, Coiera E. Do variations in hospital mortality patterns after weekend admission reflect reduced quality of care or different patient cohorts? A population-based study. BMJ Qual Saf. 2014;23(3):215-222. doi:10.1136/bmjqs-2013-002218.
10. Pauls LA, Johnson-Paben R, McGready J, Murphy JD, Pronovost PJ, Wu CL. The weekend effect in hospitalized patients: a meta-analysis. J Hosp Med. 2017;12(9):760-766.
11. American College of Surgeons. NSQIP Risk Calculator. http://riskcalculator.facs.org/RiskCalculator/. Accessed July 5, 2017.
12. Lapointe-Shaw L, Abushomar H, Chen XK, et al. Care and outcomes of patients with cancer admitted to the hospital on weekends and holidays: a retrospective cohort study. J Natl Compr Canc Netw. 2016;14(7):867-874.
© 2017 Society of Hospital Medicine
Inpatient Thrombophilia Testing: At What Expense?
Thrombotic disorders, such as venous thromboembolism (VTE) and acute ischemic stroke, are highly prevalent,1 morbid, and anxiety-provoking conditions for patients, their families, and providers.2 Often, a clear cause for these thrombotic events cannot be found, leading to diagnoses of “cryptogenic stroke” or “idiopathic VTE.” In response, many patients and clinicians search for a cause with thrombophilia testing.
However, evaluation for thrombophilia is rarely clinically useful in hospitalized patients. Test results are often inaccurate in the setting of acute thrombosis or active anticoagulation. Even when thrombophilia results are reliable, they seldom alter immediate management of the underlying condition, especially for the inherited forms.3 An important exception is when there is high clinical suspicion for the antiphospholipid syndrome (APS), because APS test results may affect both short-term and long-term drug choices and the international normalized ratio target range. Despite broad recommendations against routine thrombophilia testing (including the Choosing Wisely campaign),4 the patterns and costs of inpatient thrombophilia evaluation have not been well reported.
In this issue of the Journal of Hospital Medicine, Cox et al.5 and Mou et al.6 retrospectively review the appropriateness and impact of inpatient thrombophilia testing at 2 academic centers. In the report by Mou and colleagues, nearly half of all thrombophilia tests were deemed inappropriate, at an excess cost of over $40,000. Cox and colleagues found that 77% of patients received 1 or more thrombophilia tests with minimal clinical utility. Perhaps most striking, Cox and colleagues report that management was affected in only 2 of 163 patients (1.2%) who received thrombophilia testing; both had cryptogenic stroke, and both were started on anticoagulation after testing positive for multiple coagulation defects.
These studies confirm 2 key findings: first, that 43%-63% of tests were potentially inaccurate or of low utility, and second, that inpatient thrombophilia testing can be costly. Importantly, the costs of inappropriate testing were likely underestimated. For example, Mou et al. excluded 16.6% of tests that were performed for reasons that could not always be easily justified, such as "tests ordered with no documentation or justification" or "work-up sent solely on suspicion of possible thrombotic event without diagnostic confirmation." Additionally, Mou et al. defined appropriateness more generously than current guidelines do; for example, "recurrent provoked VTE" was listed as an appropriate indication for thrombophilia testing, although this is not supported by current guidelines for inherited thrombophilia evaluation. Similarly, Cox et al. included cryptogenic stroke as an appropriate indication for thrombophilia testing; however, current American Heart Association and American Stroke Association guidelines state that the usefulness of screening for hypercoagulable states in such patients is unknown.7 Furthermore, APS testing is not recommended in cryptogenic stroke in the absence of other clinical manifestations of APS.7
It remains puzzling why physicians continue to order inpatient thrombophilia testing despite its low clinical utility and frequently inaccurate results. Cox and colleagues suggest that a lack of clinician and patient education may partially explain this behavior. Likewise, easy access to "thrombophilia panels" makes it simple for any clinician to order a battery of tests that appear expert endorsed by virtue of their inclusion in the panel. Cox et al. found that 79% of all thrombophilia tests were ordered as part of a panel. Finally, patients and clinicians are continually searching for a reason why a thromboembolic event occurred. Thrombophilia test results, even if potentially inaccurate, may provide a false sense of relief to both parties, no matter the outcome. If a thrombophilia is found, patients and clinicians have an explanation for why the thrombotic event occurred. If testing is negative, there may be a false sense of reassurance that no genetic cause for thrombosis exists.8
How can we improve care in this regard? Given the magnitude of the financial and psychological costs of inappropriate inpatient thrombophilia testing,9 a robust deimplementation effort is needed.10,11 Electronic-medical-record–based solutions may be the most effective way to educate physicians at the point of care while simultaneously deterring inappropriate ordering. Examples include eliminating tests without evidence of clinical utility in the inpatient setting (eg, methylenetetrahydrofolate reductase); using hard stops to prevent unintentional duplicate tests12; and preventing providers from ordering tests that are unreliable in certain settings, such as protein S activity in patients receiving warfarin. The latter intervention alone would have prevented 16% of the tests (in 44% of the patients) in the Cox et al. study. Other promising efforts include embedding guidelines into order sets and requiring the provider to choose a guideline-based indication before being allowed to order such a test. Finally, eliminating thrombophilia "panels" may reduce unnecessary duplicate testing and avoid giving a false sense of clinical validation to ordering providers who may not be familiar with the indications or nuances of each individual test.
In light of mounting evidence, including the 2 important studies discussed above, it is no longer appropriate or wise to allow unfettered access to thrombophilia testing in hospitalized patients. The evidence suggests that these tests are often ordered without regard to expense, utility, or accuracy in the hospital setting. Deimplementation efforts that combine education with hard stops and restricted access to thrombophilia testing in electronic ordering systems now appear necessary.
Disclosure
Lauren Heidemann and Christopher Petrilli have no conflicts of interest to report. Geoffrey Barnes reports the following conflicts of interest: Research funding from NIH/NHLBI (K01 HL135392), Blue Cross-Blue Shield of Michigan, and BMS/Pfizer. Consulting from BMS/Pfizer and Portola.
1. Heit JA. Thrombophilia: common questions on laboratory assessment and management. Hematology Am Soc Hematol Educ Program. 2007:127-135.
2. Mozaffarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics--2015 update: a report from the American Heart Association. Circulation. 2015;131(4):e29-322.
3. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804.
4. American Society of Hematology. Ten things physicians and patients should question. Choosing Wisely. 2014. http://www.choosingwisely.org/societies/american-society-of-hematology/. Accessed July 3, 2017.
5. Cox N, Johnson SA, Vazquez S, et al. Patterns and appropriateness of thrombophilia testing in an academic medical center. J Hosp Med. 2017;12(9):705-709.
6. Mou E, Kwang H, Hom J, et al. Magnitude of potentially inappropriate thrombophilia testing in the inpatient hospital setting. J Hosp Med. 2017;12(9):735-738.
7. Kernan WN, Ovbiagele B, Black HR, et al. Guidelines for the prevention of stroke in patients with stroke and transient ischemic attack: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2014;45(7):2160-2236.
8. Stevens SM, Woller SC, Bauer KA, et al. Guidance for the evaluation and treatment of hereditary and acquired thrombophilia. J Thromb Thrombolysis. 2016;41(1):154-164.
9. Bank I, Scavenius MP, Buller HR, Middeldorp S. Social aspects of genetic testing for factor V Leiden mutation in healthy individuals and their importance for daily practice. Thromb Res. 2004;113(1):7-12.
10. Niven DJ, Mrklas KJ, Holodinsky JK, et al. Towards understanding the de-adoption of low-value clinical practices: a scoping review. BMC Med. 2015;13:255.
11. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1.
12. Procop GW, Keating C, Stagno P, et al. Reducing duplicate testing: a comparison of two clinical decision support tools. Am J Clin Pathol. 2015;143(5):623-626.
Certification of Point-of-Care Ultrasound Competency
Any conversation about point-of-care ultrasound (POCUS) inevitably brings up discussion about credentialing, privileging, and certification. While credentialing and privileging are institution-specific processes, competency certification can be extramural through a national board or intramural through an institutional process.
Some institutions have begun to develop intramural certification pathways for POCUS competency in order to grant privileges to hospitalists. In this issue of the Journal of Hospital Medicine, Mathews and Zwank2 describe a multidisciplinary collaboration to provide POCUS training, intramural certification, and quality assurance for hospitalists at one hospital in Minnesota. This model serves as a real-world example of how institutions are addressing the need to certify hospitalists in basic POCUS competency. After engaging stakeholders from radiology, critical care, emergency medicine, and cardiology, institutional standards were developed and hospitalists were assessed for basic POCUS competency. Certification included assessments of hospitalists' knowledge, image acquisition, and image interpretation skills. The model described by Mathews and Zwank did not assess competency in clinical integration but laid the groundwork for future evaluation of clinical outcomes in the cohort of certified hospitalists.
Although experts may not agree on all aspects of competency in POCUS, most will agree with the basic principles outlined by Mathews and Zwank. Initial certification should be based on training and an initial assessment of competency. Components of training should include ultrasound didactics, mentored hands-on practice, independent hands-on practice, and image interpretation practice. Ongoing certification should be based on quality assurance incorporated with an ongoing assessment of skills. Additionally, most experts will agree that competency can be recognized when observed, and that formative and summative assessments combining a gestalt of provider skills with checklist-based quantitative scoring are likely the best approach.
The real question is, what is the goal of certifying POCUS competency? Developing an institutional certification process demands substantial institutional resources and provider time. Given the large number of providers who use POCUS, institutions would have to invest in equipment and staff to operate a full-time certification program and justify why substantial resources are dedicated to certifying POCUS skills and not others. Providers may be dissuaded from using POCUS if certification requirements are burdensome, with potential negative consequences, such as reverting to performing bedside procedures without ultrasound guidance or referring all patients to interventional radiology.
Conceptually, one may speculate that certification is required for providers to bill for POCUS exams; it is not, although institutions may require certification before granting privileges to use POCUS. Moreover, based on the experience of emergency medicine, a specialty that has used POCUS for more than 20 years, billing may not be the main driver of POCUS use. A recent review of 2012 Medicare data revealed that <1% of emergency medicine providers received reimbursement for limited ultrasound exams.3 Despite the Accreditation Council for Graduate Medical Education (ACGME) requirement for POCUS competency in all graduating emergency medicine residents since 2001 and the increasing POCUS use reported by emergency medicine physicians,4,5 most emergency medicine physicians do not bill for POCUS exams. Perhaps use of POCUS as a "quick look" or extension of the physical examination is more common than previously thought. Although billing for POCUS exams can generate some clinical revenue, the benefits to the healthcare system of expediting care,6,7 reducing ancillary testing,8,9 and reducing procedural complications10,11 likely outweigh the small gains from billing for limited ultrasound exams. As healthcare payment models evolve to reward healthcare systems for achieving good outcomes rather than for services rendered, certification for the sole purpose of billing may become obsolete. Furthermore, concerns that billing increases medical liability from POCUS use are likely overstated: few lawsuits have resulted from diagnoses missed by POCUS, and most lawsuits have stemmed from failure to perform a POCUS exam in a timely manner.12,13
Many medical students graduating today have had some training in POCUS14 and, as this new generation of physicians enters the workforce, they will likely view POCUS as part of their routine bedside evaluation of patients. If POCUS training is integrated into medical school and residency curricula, and national board certification incorporates basic POCUS competency, then most institutions may no longer feel obligated to certify POCUS competency locally, and institutional certification programs, such as the one described by Mathews and Zwank, would become obsolete.
For now, until all providers enter the workforce with basic competency in POCUS and medical culture accepts that ultrasound is a diagnostic tool available to any trained provider, hospitalists may need to provide proof of their competence through intramural or extramural certification. The work of Mathews and Zwank provides an example of how local certification processes can be established. In a future edition of the Journal of Hospital Medicine, the Society of Hospital Medicine Point-of-Care Ultrasound Task Force will present a position statement with recommendations for certification of competency in bedside ultrasound-guided procedures.
Disclosure
Nilam Soni receives support from the U.S. Department of Veterans Affairs, Quality Enhancement Research Initiative (QUERI) Partnered Evaluation Initiative Grant (HX002263-01A1). Brian P. Lucas receives support from the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development and Dartmouth SYNERGY, National Institutes of Health, National Center for Translational Science (UL1TR001086). The contents of this publication do not represent the views of the U.S. Department of Veterans Affairs or the United States Government.
1. Bahner DP, Hughes D, Royall NA. I-AIM: a novel model for teaching and performing focused sonography. J Ultrasound Med. 2012;31:295-300. PubMed
2. Mathews BK, Zwank M. Hospital Medicine Point of Care Ultrasound Credentialing: An Example Protocol. J Hosp Med. 2017;12(9):767-772. PubMed
3. Hall MK, Hall J, Gross CP, et al. Use of Point-of-Care Ultrasound in the Emergency Department: Insights From the 2012 Medicare National Payment Data Set. J Ultrasound Med. 2016;35:2467-2474. PubMed
4. Amini R, Wyman MT, Hernandez NC, Guisto JA, Adhikari S. Use of Emergency Ultrasound in Arizona Community Emergency Departments. J Ultrasound Med. 2017;36(5):913-921. PubMed
5. Herbst MK, Camargo CA, Jr., Perez A, Moore CL. Use of Point-of-Care Ultrasound in Connecticut Emergency Departments. J Emerg Med. 2015;48:191-196. PubMed
6. Kory PD, Pellecchia CM, Shiloh AL, Mayo PH, DiBello C, Koenig S. Accuracy of ultrasonography performed by critical care physicians for the diagnosis of DVT. Chest. 2011;139:538-542. PubMed
7. Lucas BP, Candotti C, Margeta B, et al. Hand-carried echocardiography by hospitalists: a randomized trial. Am J Med. 2011;124:766-774. PubMed
8. Oks M, Cleven KL, Cardenas-Garcia J, et al. The effect of point-of-care ultrasonography on imaging studies in the medical ICU: a comparative study. Chest. 2014;146:1574-1577. PubMed
9. Koenig S, Chandra S, Alaverdian A, Dibello C, Mayo PH, Narasimhan M. Ultrasound assessment of pulmonary embolism in patients receiving CT pulmonary angiography. Chest. 2014;145:818-823. PubMed
10. Mercaldi CJ, Lanes SF. Ultrasound guidance decreases complications and improves the cost of care among patients undergoing thoracentesis and paracentesis. Chest. 2013;143:532-538. PubMed
11. Patel PA, Ernst FR, Gunnarsson CL. Ultrasonography guidance reduces complications and costs associated with thoracentesis procedures. J Clin Ultrasound. 2012;40:135-141. PubMed
12. Stolz L, O’Brien KM, Miller ML, Winters-Brown ND, Blaivas M, Adhikari S. A review of lawsuits related to point-of-care emergency ultrasound applications. West J Emerg Med. 2015;16:1-4. PubMed
13. Blaivas M, Pawl R. Analysis of lawsuits filed against emergency physicians for point-of-care emergency ultrasound examination performance and interpretation over a 20-year period. Am J Emerg Med. 2012;30:338-341. PubMed
14. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89:1681-1686. PubMed
Any conversation about point-of-care ultrasound (POCUS) inevitably brings up discussion about credentialing, privileging, and certification. While credentialing and privileging are institution-specific processes, competency certification can be extramural through a national board or intramural through an institutional process.
Some institutions have begun to develop intramural certification pathways for POCUS competency in order to grant privileges to hospitalists. In this edition of the Journal of Hospital Medicine, Mathews and Zwank2 describe a multidisciplinary collaboration to provide POCUS training, intramural certification, and quality assurance for hospitalists at one hospital in Minnesota. This model serves as a real-world example of how institutions are addressing the need to certify hospitalists in basic POCUS competency. After engaging stakeholders from radiology, critical care, emergency medicine, and cardiology, institutional standards were developed and hospitalists were assessed for basic POCUS competency. Certification included assessments of hospitalists’ knowledge, image acquisition, and image interpretation skills. The model described by Mathews and Zwank did not assess competency in clinical integration but laid the groundwork for future evaluation of clinical outcomes in the cohort of certified hospitalists.
Although experts may not agree on all aspects of competency in POCUS, most will agree with the basic principles outlined by Mathews and Zwank. Initial certification should be based on training and an initial assessment of competency. Components of training should include ultrasound didactics, mentored hands-on practice, independent hands-on practice, and image interpretation practice. Ongoing certification should be based on quality assurance incorporated with an ongoing assessment of skills. Additionally, most experts will agree that competency can be recognized, and formative and summative assessments that combine a gestalt of provider skills with quantitative scoring systems using checklists are likely the best approach.
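The "checklist plus gestalt" approach described above can be sketched in code. This is a purely illustrative model, not any published certification standard: the checklist items, the 80% pass threshold, and the global-rating cutoff are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

# Illustrative checklist items for an image-acquisition assessment
# (hypothetical; not from any published POCUS certification standard).
CHECKLIST = [
    "selects appropriate transducer and preset",
    "optimizes depth and gain",
    "acquires all required views",
    "labels and saves images",
    "interprets findings correctly",
]

@dataclass
class Assessment:
    items_passed: set          # checklist items performed correctly
    global_rating: int         # assessor gestalt, 1 (novice) to 5 (entrustable)

    def checklist_score(self) -> float:
        # Quantitative component: fraction of checklist items passed.
        return len(self.items_passed & set(CHECKLIST)) / len(CHECKLIST)

    def certifiable(self) -> bool:
        # Combine the quantitative score with the assessor's gestalt,
        # requiring both to clear a threshold (both cutoffs are assumed).
        return self.checklist_score() >= 0.8 and self.global_rating >= 4

a = Assessment(items_passed=set(CHECKLIST[:4]), global_rating=4)
print(a.checklist_score(), a.certifiable())
```

The design point is simply that neither component alone decides the outcome: a perfect checklist score with a low entrustment rating, or vice versa, would not certify.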
The real question is, what is the goal of certification of POCUS competency? Developing an institutional certification process demands substantive resources from the institution and time from providers. Institutions would have to invest in equipment and staff to operate a full-time certification program, given the large number of providers who use POCUS, and would have to justify why substantive resources are being dedicated to certifying POCUS skills and not others. Providers may be dissuaded from using POCUS if certification requirements are burdensome, which has potential negative consequences, such as reverting to performing bedside procedures without ultrasound guidance or referring all patients to interventional radiology.
Conceptually, one may speculate that certification is required for providers to bill for POCUS exams, but certification is not required to bill, although institutions may require certification before granting privileges to use POCUS. However, based on the emergency medicine experience, a specialty that has been using POCUS for more than 20 years, billing may not be the main driver of POCUS use. A recent review of 2012 Medicare data revealed that <1% of emergency medicine providers received reimbursement for limited ultrasound exams.3 Despite the Accreditation Council for Graduate Medical Education (ACGME) requirement for POCUS competency of all graduating emergency medicine residents since 2001 and the increasing POCUS use reported by emergency medicine physicians,4,5 most emergency medicine physicians are not billing for POCUS exams. Maybe use of POCUS as a “quick look” or extension of the physical examination is more common than previously thought. Although billing for POCUS exams can generate some clinical revenue, the benefits for the healthcare system by expediting care,6,7 reducing ancillary testing,8,9 and reducing procedural complications10,11 likely outweigh the small gains from billing for limited ultrasound exams. As healthcare payment models evolve to reward healthcare systems that achieve good outcomes rather than services rendered, certification for the sole purpose of billing may become obsolete. Furthermore, concerns about billing increasing medical liability from using POCUS are likely overstated because few lawsuits have resulted from missed diagnoses by POCUS, and most lawsuits have been from failure to perform a POCUS exam in a timely manner.12,13
Many medical students graduating today have had some training in POCUS14 and, as this new generation of physicians enters the workforce, they will likely view POCUS as part of their routine bedside evaluation of patients. If POCUS training is integrated into medical school and residency curricula, and national board certification incorporates basic POCUS competency, then most institutions may no longer feel obligated to certify POCUS competency locally, and institutional certification programs, such as the one described by Mathews and Zwank, would become obsolete.
For now, until all providers enter the workforce with basic competency in POCUS and medical culture accepts that ultrasound is a diagnostic tool available to any trained provider, hospitalists may need to provide proof of their competence through intramural or extramural certification. The work of Mathews and Zwank provides an example of how local certification processes can be established. In a future edition of the Journal of Hospital Medicine, the Society of Hospital Medicine Point-of-Care Ultrasound Task Force will present a position statement with recommendations for certification of competency in bedside ultrasound-guided procedures.
Disclosure
Nilam Soni receives support from the U.S. Department of Veterans Affairs, Quality Enhancement Research Initiative (QUERI) Partnered Evaluation Initiative Grant (HX002263-01A1). Brian P. Lucas receives support from the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development and Dartmouth SYNERGY, National Institutes of Health, National Center for Translational Science (UL1TR001086). The contents of this publication do not represent the views of the U.S. Department of Veterans Affairs or the United States Government.
1. Bahner DP, Hughes D, Royall NA. I-AIM: a novel model for teaching and performing focused sonography. J Ultrasound Med. 2012;31:295-300.
2. Mathews BK, Zwank M. Hospital Medicine Point of Care Ultrasound Credentialing: An Example Protocol. J Hosp Med. 2017;12(9):767-772.
3. Hall MK, Hall J, Gross CP, et al. Use of Point-of-Care Ultrasound in the Emergency Department: Insights From the 2012 Medicare National Payment Data Set. J Ultrasound Med. 2016;35:2467-2474.
4. Amini R, Wyman MT, Hernandez NC, Guisto JA, Adhikari S. Use of Emergency Ultrasound in Arizona Community Emergency Departments. J Ultrasound Med. 2017;36(5):913-921.
5. Herbst MK, Camargo CA Jr, Perez A, Moore CL. Use of Point-of-Care Ultrasound in Connecticut Emergency Departments. J Emerg Med. 2015;48:191-196.
6. Kory PD, Pellecchia CM, Shiloh AL, Mayo PH, DiBello C, Koenig S. Accuracy of ultrasonography performed by critical care physicians for the diagnosis of DVT. Chest. 2011;139:538-542.
7. Lucas BP, Candotti C, Margeta B, et al. Hand-carried echocardiography by hospitalists: a randomized trial. Am J Med. 2011;124:766-774.
8. Oks M, Cleven KL, Cardenas-Garcia J, et al. The effect of point-of-care ultrasonography on imaging studies in the medical ICU: a comparative study. Chest. 2014;146:1574-1577.
9. Koenig S, Chandra S, Alaverdian A, Dibello C, Mayo PH, Narasimhan M. Ultrasound assessment of pulmonary embolism in patients receiving CT pulmonary angiography. Chest. 2014;145:818-823.
10. Mercaldi CJ, Lanes SF. Ultrasound guidance decreases complications and improves the cost of care among patients undergoing thoracentesis and paracentesis. Chest. 2013;143:532-538.
11. Patel PA, Ernst FR, Gunnarsson CL. Ultrasonography guidance reduces complications and costs associated with thoracentesis procedures. J Clin Ultrasound. 2012;40:135-141.
12. Stolz L, O’Brien KM, Miller ML, Winters-Brown ND, Blaivas M, Adhikari S. A review of lawsuits related to point-of-care emergency ultrasound applications. West J Emerg Med. 2015;16:1-4.
13. Blaivas M, Pawl R. Analysis of lawsuits filed against emergency physicians for point-of-care emergency ultrasound examination performance and interpretation over a 20-year period. Am J Emerg Med. 2012;30:338-341.
14. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89:1681-1686.
© 2017 Society of Hospital Medicine
A Video Is Worth a Thousand Words
There is no doubt about the importance of assessing, documenting, and honoring patient wishes regarding care. For hospitalized patients, code status is a critical treatment preference to document given that the need for cardiopulmonary resuscitation (CPR) arises suddenly, outcomes are often poor, and the default is for patients to receive the treatment unless they actively decline it. Hospitalists are expected to document code status for every hospitalized patient, but admission code status conversations are often brief—and that might be all right. A code status discussion for a 70-year-old man with no chronic medical problems and excellent functional status who has been admitted for pain after a motor vehicle accident may require only an introduction to the concept of advance care planning, the importance of having a surrogate, and confirmation of full code status. On the other hand, a 45-year-old woman with metastatic pancreatic cancer would likely benefit from a family meeting in which the hospitalist could review her disease course and prognosis, assess her values and priorities in the context of her advanced illness, make treatment recommendations—including code status—that are consistent with her values, and elicit questions.1,2 We need to free up hospitalists from spending time discussing code status with every patient so that they can spend more time in quality goals of care discussions with seriously ill patients. The paradigm of the one doctor—one patient admission code status conversation for every patient is no longer realistic.
As reported by Merino and colleagues in this issue of JHM, video decision aids about CPR can offer an innovative solution to determining code status for hospitalized patients.3 The authors conducted a prospective, randomized controlled trial that enrolled older adults admitted to the hospital medicine service at the Veterans Affairs (VA) hospital in Minneapolis. Participants (N = 119) were randomized to usual care or to watch a 6-minute video that explained code status options, used a mannequin to illustrate a mock code, and provided information about potential complications and survival rates. Patients who watched the video were more likely to choose do not resuscitate/do not intubate status, with a large effect size (56% in the intervention group vs. 17% in the control group, P < 0.00001).
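The size of that difference can be sanity-checked from the reported figures. The sketch below assumes roughly even randomization of the 119 participants (the exact group sizes are not given here) and backs out event counts from the reported percentages; a pooled two-proportion z-test then yields a P-value of the same order as the reported P < 0.00001.

```python
from math import sqrt, erfc

# Assumed group sizes: N = 119 split roughly evenly (hypothetical; the
# trial's exact allocation is not stated in this summary).
n_video, n_control = 60, 59
dnr_video = round(0.56 * n_video)      # ~34 chose DNR/DNI after the video
dnr_control = round(0.17 * n_control)  # ~10 chose DNR/DNI with usual care

p1 = dnr_video / n_video
p2 = dnr_control / n_control
pooled = (dnr_video + dnr_control) / (n_video + n_control)

# Two-proportion z-test with pooled standard error
se = sqrt(pooled * (1 - pooled) * (1 / n_video + 1 / n_control))
z = (p1 - p2) / se
p_two_sided = erfc(abs(z) / sqrt(2))   # two-sided normal tail probability

print(f"z = {z:.2f}, two-sided P = {p_two_sided:.1e}")
```

Under these assumed counts the test statistic is well above 4, so the reconstructed P-value lands in the 1e-5 to 1e-6 range, consistent with the magnitude the authors report.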
This study adds to a growing body of literature about this powerful modality to assist with advance care planning. Over the past 10 years, studies—conducted primarily by Volandes, El-Jawahri, and colleagues—have demonstrated how video decision aids impact the care that patients want in the setting of cancer, heart failure, serious illness with short prognosis, and future dementia.4-9 This literature has also shown that video decision aids can increase patients’ knowledge about CPR and increase the stability of decisions over time. Further, video decision aids have been well accepted by patients, who report that they would recommend such videos to others. This body of evidence underscores the potential of video decision aids to improve concordance between patient preferences and care provided, which is key given the longstanding and widespread concern about patients receiving care that is inconsistent with their values at the end of life.10 In short, video decision aids work.
Merino and colleagues are the first to examine the use of a video decision aid about code status in a general population of older adults on a hospital medicine service and the second to integrate such a video into usual inpatient care, which are important advancements.2,3 There are several issues that warrant further consideration prior to widely disseminating such a video, however. As the authors note, the participants in this VA study were overwhelmingly white men and their average age was 75. Further, the authors found a nonsignificant trend towards patients in the intervention group having less trust that “my doctors and healthcare team want what is best for me” (76% in the intervention group vs. 93% in the control group; P = 0.083). Decision making about life-sustaining therapies and reactions to communication about serious illness are heavily influenced by cultural and socioeconomic factors, including health literacy.11 It will be important to seek feedback from a diverse group of patients and families to ensure that the video decision aid is interpreted accurately, renders decisions that are consistent with patients’ values, and does not negatively impact the clinician-patient relationship.12 Additionally, as the above cases illustrate, code status discussions should be tailored to patient factors, including illness severity and point in the disease course. Hospitalists will ultimately benefit from having access to multiple different videos about a range of advance care planning topics that can be used when appropriate.
In addition to selecting the right video for the right patient, the next challenge for hospitalists and health systems will be how to implement them within real-world clinical care and a broader approach to advance care planning. There are technical and logistical challenges to displaying videos in hospital rooms, and more significant challenges in ensuring timely follow-up discussions, communication of patients’ dynamic care preferences to their surrogates, changes to inpatient orders, documentation in the electronic medical record where it can be easily found in the future, and completion of advance directives and Physician Orders for Life Sustaining Treatment forms to communicate patients’ goals of care beyond the hospital and health system. Each of these steps is critical and is supported through videos and activities in the free, patient-facing, PREPARE web-based tool (https://www.prepareforyourcare.org/).2,13,14
The ubiquitous presence of videos in our lives speaks to their power to engage and affect us. Video decision aids provide detailed explanations and vivid images that convey more than words can alone. While there is more work to be done to ensure videos are appropriate for all hospitalized patients and support rather than detract from patient-doctor relationships, this study and others like it show that video decision aids are potent tools to promote better decision-making and higher value, more efficient care.
Disclosures
The authors have nothing to disclose.
1. Piscator E, Hedberg P, Göransson K, Djärv T. Survival after in-hospital cardiac arrest is highly associated with the Age-combined Charlson Co-morbidity Index in a cohort study from a two-site Swedish University hospital. Resuscitation. 2016;99:79-83.
2. Jain A, Corriveau S, Quinn K, Gardhouse A, Vegas DB, You JJ. Video decision aids to assist with advance care planning: a systematic review and meta-analysis. BMJ Open. 2015;5(6):e007491.
3. Merino AM, Greiner R, Hartwig K. A randomized controlled trial of a CPR decision support video for patients admitted to the general medicine service. J Hosp Med. 2017;12(9):700-704.
4. Volandes AE, Levin TT, Slovin S, Carvajal RD, O’Reilly EM, et al. Augmenting advance care planning in poor prognosis cancer with a video decision aid: a preintervention-postintervention study. Cancer. 2012;118(17):4331-4338.
5. El-Jawahri A, Paasche-Orlow MK, Matlock D, Stevenson LW, Lewis EF, Stewart G, et al. Randomized, controlled trial of an advance care planning video decision support tool for patients with advanced heart failure. Circulation. 2016;134(1):52-60.
6. El-Jawahri A, Mitchell SL, Paasche-Orlow MK, Temel JS, Jackson VA, Rutledge RR, et al. A randomized controlled trial of a CPR and intubation video decision support tool for hospitalized patients. J Gen Intern Med. 2015;30(8):1071-1080.
7. Volandes AE, Ferguson LA, Davis AD, Hull NC, Green MJ, Chang Y, et al. Assessing end-of-life preferences for advanced dementia in rural patients using an educational video: a randomized controlled trial. J Palliat Med. 2011;14(2):169-177.
8. Volandes AE, Paasche-Orlow MK, Barry MJ, Gillick MR, Minaker KL, Chang Y, et al. Video decision support tool for advance care planning in dementia: randomised controlled trial. BMJ. 2009;338:b2159.
9. El-Jawahri A, Podgurski LM, Eichler AF, Plotkin SR, Temel JS, Mitchell SL, et al. Use of video to facilitate end-of-life discussions with patients with cancer: a randomized controlled trial. J Clin Oncol. 2010;28(2):305-310.
10. IOM (Institute of Medicine). Dying in America: Improving Quality and Honoring Individual Preferences Near the End of Life. Washington, DC: The National Academies Press; 2015.
11. Castillo LS, Williams BA, Hooper SM, Sabatino CP, Weithorn LA, Sudore RL. Lost in translation: the unintended consequences of advance directive law on clinical care. Ann Intern Med. 2011;154(2):121-128.
12. Anderson WG, Cimino JW, Lo B. Seriously ill hospitalized patients’ perspectives on the benefits and harms of two models of hospital CPR discussions. Patient Educ Couns. 2013;93(3):633-640.
13. Sudore RL, Boscardin J, Feuz MA, McMahan RD, Katen MT, Barnes DE. Effect of the PREPARE website vs an easy-to-read advance directive on advance care planning documentation and engagement among veterans: a randomized clinical trial [published online ahead of print May 18, 2017]. JAMA Intern Med. 2017. doi: 10.1001/jamainternmed.2017.1607.
14. Improving Communication about Serious Illness: Implementation Toolkit. SHM Center for Quality Improvement. Society of Hospital Medicine. 2017. http://www.hospitalmedicine.org/Web/Quality___Innovation/Implementation_Toolkit/EOL/Palliative_Care_Home_Society_of_Hospital_Medicine.aspx. Accessed June 13, 2017.
There is no doubt about the importance of assessing, documenting, and honoring patient wishes regarding care. For hospitalized patients, code status is a critical treatment preference to document given that the need for cardiopulmonary resuscitation (CPR) arises suddenly, outcomes are often poor, and the default is for patients to receive the treatment unless they actively decline it. Hospitalists are expected to document code status for every hospitalized patient, but admission code status conversations are often brief—and that might be all right. A code status discussion for a 70-year-old man with no chronic medical problems and excellent functional status who has been admitted for pain after a motor vehicle accident may require only an introduction to the concept of advance care planning, the importance of having a surrogate, and confirmation of full code status. On the other hand, a 45-year-old woman with metastatic pancreatic cancer would likely benefit from a family meeting in which the hospitalist could review her disease course and prognosis, assess her values and priorities in the context of her advanced illness, make treatment recommendations—including code status—that are consistent with her values, and elicit questions.1,2 We need to free up hospitalists from spending time discussing code status with every patient so that they can spend more time in quality goals of care discussions with seriously ill patients. The paradigm of the one doctor—one patient admission code status conversation for every patient is no longer realistic.
As reported by Merino and colleagues in this issue of JHM, video decision aids about CPR offer an innovative solution to determining code status for hospitalized patients.3 The authors conducted a prospective, randomized controlled trial that enrolled older adults admitted to the hospital medicine service at the Veterans Affairs (VA) hospital in Minneapolis. Participants (N = 119) were randomized to usual care or to watch a 6-minute video that explained code status options, used a mannequin to illustrate a mock code, and provided information about potential complications and survival rates. Patients who watched the video were more likely to choose do not resuscitate/do not intubate status, with a large effect size (56% in the intervention group vs. 17% in the control group, P < 0.00001).
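The magnitude of that reported difference can be sanity-checked with a simple two-proportion z-test. This is only an illustrative sketch: the per-arm sample sizes (60 and 59) are assumed from the reported N = 119 with roughly even randomization, not taken from the paper.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two proportions."""
    # Pooled proportion under the null hypothesis of no difference
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided tail probability from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# 56% DNR/DNI with the video vs. 17% with usual care;
# arm sizes of 60 and 59 are an assumption, not from the paper.
z, p = two_proportion_z(0.56, 60, 0.17, 59)
print(f"z = {z:.2f}, p = {p:.1e}")
```

Even under these rough assumptions, the test statistic exceeds 4 and the P value falls well below conventional significance thresholds, consistent with the authors' characterization of a large effect.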
This study adds to a growing body of literature about this powerful modality to assist with advance care planning. Over the past 10 years, studies—conducted primarily by Volandes, El-Jawahri, and colleagues—have demonstrated how video decision aids impact the care that patients want in the setting of cancer, heart failure, serious illness with short prognosis, and future dementia.4-9 This literature has also shown that video decision aids can increase patients’ knowledge about CPR and increase the stability of decisions over time. Further, video decision aids have been well accepted by patients, who report that they would recommend such videos to others. This body of evidence underscores the potential of video decision aids to improve concordance between patient preferences and care provided, which is key given the longstanding and widespread concern about patients receiving care that is inconsistent with their values at the end of life.10 In short, video decision aids work.
Merino and colleagues are the first to examine the use of a video decision aid about code status in a general population of older adults on a hospital medicine service and the second to integrate such a video into usual inpatient care, which are important advancements.2,3 There are several issues that warrant further consideration prior to widely disseminating such a video, however. As the authors note, the participants in this VA study were overwhelmingly white men, and their average age was 75. Further, the authors found a nonsignificant trend toward patients in the intervention group having less trust that “my doctors and healthcare team want what is best for me” (76% in the intervention group vs. 93% in the control group; P = 0.083). Decision making about life-sustaining therapies and reactions to communication about serious illness are heavily influenced by cultural and socioeconomic factors, including health literacy.11 It will be important to seek feedback from a diverse group of patients and families to ensure that the video decision aid is interpreted accurately, renders decisions that are consistent with patients’ values, and does not negatively impact the clinician-patient relationship.12 Additionally, as the above cases illustrate, code status discussions should be tailored to patient factors, including illness severity and point in the disease course. Hospitalists will ultimately benefit from having access to multiple videos covering a range of advance care planning topics that can be used when appropriate.
In addition to selecting the right video for the right patient, the next challenge for hospitalists and health systems will be how to implement them within real-world clinical care and a broader approach to advance care planning. There are technical and logistical challenges to displaying videos in hospital rooms, and more significant challenges in ensuring timely follow-up discussions, communication of patients’ dynamic care preferences to their surrogates, changes to inpatient orders, documentation in the electronic medical record where it can be easily found in the future, and completion of advance directives and Physician Orders for Life Sustaining Treatment forms to communicate patients’ goals of care beyond the hospital and health system. Each of these steps is critical and is supported through videos and activities in the free, patient-facing, PREPARE web-based tool (https://www.prepareforyourcare.org/).2,13,14
The ubiquitous presence of videos in our lives speaks to their power to engage and affect us. Video decision aids provide detailed explanations and vivid images that convey more than words can alone. While there is more work to be done to ensure videos are appropriate for all hospitalized patients and support rather than detract from patient-doctor relationships, this study and others like it show that video decision aids are potent tools to promote better decision-making and higher value, more efficient care.
Disclosures
The authors have nothing to disclose.
1. Piscator E, Hedberg P, Göransson K, Djärv T. Survival after in-hospital cardiac arrest is highly associated with the Age-combined Charlson Co-morbidity Index in a cohort study from a two-site Swedish University hospital. Resuscitation. 2016;99:79-83. PubMed
2. Jain A, Corriveau S, Quinn K, Gardhouse A, Vegas DB, You JJ. Video decision aids to assist with advance care planning: a systematic review and meta-analysis. BMJ Open. 2015;5(6):e007491. PubMed
3. Merino AM, Greiner R, Hartwig K. A randomized controlled trial of a CPR decision support video for patients admitted to the general medicine service. J Hosp Med. 2017;12(9):700-704. PubMed
4. Volandes AE, Levin TT, Slovin S, Carvajal RD, O’Reilly EM, et al. Augmenting advance care planning in poor prognosis cancer with a video decision aid: a preintervention-postintervention study. Cancer. 2012;118(17):4331-4338. PubMed
5. El-Jawahri A, Paasche-Orlow MK, Matlock D, Stevenson LW, Lewis EF, Stewart G, et al. Randomized, controlled trial of an advance care planning video decision support tool for patients with advanced heart failure. Circulation. 2016;134(1):52-60. PubMed
6. El-Jawahri A, Mitchell SL, Paasche-Orlow MK, Temel JS, Jackson VA, Rutledge RR, et al. A randomized controlled trial of a CPR and intubation video decision support tool for hospitalized patients. J Gen Intern Med. 2015;30(8):1071-1080. PubMed
7. Volandes AE, Ferguson LA, Davis AD, Hull NC, Green MJ, Chang Y, et al. Assessing end-of-life preferences for advanced dementia in rural patients using an educational video: a randomized controlled trial. J Palliat Med. 2011;14(2):169-177. PubMed
8. Volandes AE, Paasche-Orlow MK, Barry MJ, Gillick MR, Minaker KL, Chang Y, et al. Video decision support tool for advance care planning in dementia: randomised controlled trial. BMJ. 2009;338:b2159. PubMed
9. El-Jawahri A, Podgurski LM, Eichler AF, Plotkin SR, Temel JS, Mitchell SL, et al. Use of video to facilitate end-of-life discussions with patients with cancer: a randomized controlled trial. J Clin Oncol. 2010;28(2):305-310. PubMed
10. IOM (Institute of Medicine). Dying in America: Improving Quality and Honoring Individual Preferences Near the End of Life. Washington, DC: The National Academies Press; 2015. PubMed
11. Castillo LS, Williams BA, Hooper SM, Sabatino CP, Weithorn LA, Sudore RL. Lost in translation: the unintended consequences of advance directive law on clinical care. Ann Intern Med. 2011;154(2):121-128. PubMed
12. Anderson WG, Cimino JW, Lo B. Seriously ill hospitalized patients’ perspectives on the benefits and harms of two models of hospital CPR discussions. Patient Educ Couns. 2013;93(3):633-640. PubMed
13. Sudore RL, Boscardin J, Feuz MA, McMahan RD, Katen MT, Barnes DE. Effect of the PREPARE website vs an easy-to-read advance directive on advance care planning documentation and engagement among veterans: a randomized clinical trial [published online ahead of print May 18, 2017]. JAMA Intern Med. doi: 10.1001/jamainternmed.2017.1607. PubMed
14. Improving Communication about Serious Illness: Implementation Toolkit. SHM Center for Quality Improvement. Society of Hospital Medicine. 2017. http://www.hospitalmedicine.org/Web/Quality___Innovation/Implementation_Toolkit/EOL/Palliative_Care_Home_Society_of_Hospital_Medicine.aspx. Accessed June 13, 2017.
© 2017 Society of Hospital Medicine