FDA authorizes CDC’s test for Zika virus

Blood samples (Photo by Jeremy L. Grisham)

The US Food and Drug Administration (FDA) has issued an Emergency Use Authorization (EUA) for a laboratory test used to detect the Zika virus.

The test, which was developed by the Centers for Disease Control and Prevention (CDC), is called the Zika IgM Antibody Capture Enzyme-Linked Immunosorbent Assay (Zika MAC-ELISA).

It will be distributed to certain laboratories in the US and abroad over the next 2 weeks.

The test will not be available in US hospitals or other primary care settings.

The Zika MAC-ELISA test can detect human immunoglobulin M (IgM) antibodies to the Zika virus. These antibodies appear in the blood of an infected person 4 to 5 days after the start of the illness, and they remain in the blood for about 12 weeks.

The Zika MAC-ELISA test is intended for use with serum, or with cerebrospinal fluid submitted alongside a patient-matched serum sample, from individuals who meet CDC clinical and epidemiological criteria for Zika testing. This includes people with a history of symptoms associated with Zika and/or people who have recently traveled to an area with active Zika transmission.

Results of Zika MAC-ELISA tests require careful interpretation. The test can give false-positive results when someone has been infected with a virus closely related to Zika (such as dengue virus).

When results are positive or inconclusive, the CDC or a CDC-authorized laboratory will perform additional testing (a plaque reduction neutralization test) to confirm the presence of antibodies to Zika virus.

Moreover, a negative test result does not necessarily mean a person has not been infected with Zika virus. If a sample is collected just after a person becomes ill, there may not be enough IgM antibodies for the test to measure, resulting in a false-negative.

Similarly, if the sample was collected more than 12 weeks after illness, it is possible that the body has successfully fought the virus and IgM antibody levels have dropped below the detectable limit.
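
The interpretation caveats above amount to a simple decision flow. Below is a minimal, illustrative sketch in Python of that logic; the function name, result labels, and exact day cutoffs are assumptions for illustration (the article gives only the 4-to-5-day and roughly 12-week IgM window), not the CDC's actual reporting algorithm.

```python
from datetime import date

def interpret_mac_elisa(result: str, symptom_onset: date, collection: date) -> str:
    """Illustrative interpretation of a Zika MAC-ELISA result (not the CDC algorithm)."""
    days_since_onset = (collection - symptom_onset).days
    if result in ("positive", "inconclusive"):
        # Cross-reactivity with related flaviviruses (e.g., dengue) can cause false
        # positives, so confirmatory plaque reduction neutralization testing (PRNT)
        # is performed by the CDC or a CDC-authorized laboratory.
        return "refer for confirmatory PRNT"
    if days_since_onset < 4:
        return "negative, but IgM may not yet be detectable (possible false negative)"
    if days_since_onset > 84:  # roughly 12 weeks after illness onset
        return "negative, but IgM may have waned below the detectable limit"
    return "negative within the expected IgM detection window"

# Example: a sample drawn 2 days after symptom onset.
print(interpret_mac_elisa("negative", date(2016, 2, 1), date(2016, 2, 3)))
```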

The CDC will begin distributing the Zika MAC-ELISA test over the next 2 weeks to qualified laboratories in the Laboratory Response Network, an integrated network of domestic and international laboratories that can respond to public health emergencies.

About the EUA

An EUA allows the use of unapproved medical products or unapproved uses of approved medical products in an emergency. The products must be used to diagnose, treat, or prevent serious or life-threatening conditions caused by chemical, biological, radiological, or nuclear threat agents, when there are no adequate alternatives.

As there are no commercially available diagnostic tests approved by the FDA for the detection of Zika virus infection, the agency decided an EUA was crucial to ensure timely access to a diagnostic tool.

The FDA issued the EUA for the Zika MAC-ELISA test based on data submitted by the CDC and on the US Secretary of Health and Human Services’ declaration that circumstances exist to justify the emergency use of in vitro diagnostic tests for the detection of Zika virus and Zika virus infection. This EUA will end when the Secretary’s declaration ends, unless the FDA revokes it sooner.


Predicting all‐cause readmissions using electronic health record data from the entire hospitalization: Model development and comparison

Unplanned hospital readmissions are frequent, costly, and potentially avoidable.[1, 2] Due to major federal financial readmissions penalties targeting excessive 30‐day readmissions, there is increasing attention to implementing hospital‐initiated interventions to reduce readmissions.[3, 4] However, universal enrollment of all hospitalized patients into such programs may be too resource intensive for many hospitals.[5] To optimize efficiency and effectiveness, interventions should be targeted to individuals most likely to benefit.[6, 7] However, existing readmission risk‐prediction models have achieved only modest discrimination, have largely used administrative claims data not available until months after discharge, or are limited to only a subset of patients with Medicare or a specific clinical condition.[8, 9, 10, 11, 12, 13, 14] These limitations have precluded accurate identification of high‐risk individuals in an all‐payer general medical inpatient population to provide actionable information for intervention prior to discharge.

Approaches using electronic health record (EHR) data could allow early identification of high‐risk patients during the index hospitalization to enable initiation of interventions prior to discharge. To date, such strategies have relied largely on EHR data from the day of admission.[15, 16] However, given that variation in 30‐day readmission rates is thought to reflect the quality of in‐hospital care, incorporating EHR data from the entire hospital stay to reflect hospital care processes and clinical trajectory may more accurately identify at‐risk patients.[17, 18, 19, 20] Improved accuracy in risk prediction would help better target intervention efforts in the immediate postdischarge period, an interval characterized by heightened vulnerability for adverse events.[21]

To help hospitals target transitional care interventions more effectively to high‐risk individuals prior to discharge, we derived and validated a readmissions risk‐prediction model incorporating EHR data from the entire course of the index hospitalization, which we termed the full‐stay EHR model. We also compared the full‐stay EHR model performance to our group's previously derived prediction model based on EHR data on the day of admission, termed the first‐day EHR model, as well as to 2 other validated readmission models similarly intended to yield near real‐time risk predictions prior to or shortly after hospital discharge.[9, 10, 15]

METHODS

Study Design, Population, and Data Sources

We conducted an observational cohort study using EHR data from 6 hospitals in the Dallas-Fort Worth metroplex between November 1, 2009, and October 30, 2010; all sites used the same EHR system (Epic Systems Corp., Verona, WI). One site was a university‐affiliated safety net hospital; the remaining 5 sites were teaching and nonteaching community sites.

We included consecutive hospitalizations among adults ≥18 years old discharged alive from any medicine inpatient service. For individuals with multiple hospitalizations during the study period, we included only the first hospitalization. We excluded individuals who died during the index hospitalization, were transferred to another acute care facility, left against medical advice, or who died outside of the hospital within 30 days of discharge. For model derivation, we randomly split the sample into separate derivation (50%) and validation (50%) cohorts.

Outcomes

The primary outcome was 30‐day hospital readmission, defined as a nonelective hospitalization within 30 days of discharge to any of 75 acute care hospitals within a 100‐mile radius of Dallas, ascertained from an all‐payer regional hospitalization database. Nonelective hospitalizations included all hospitalizations classified as an emergency, urgent, or trauma admission, and excluded those classified as elective, as per the Centers for Medicare and Medicaid Services Claim Inpatient Admission Type Code definitions.
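
To make this outcome definition concrete, here is a minimal sketch, in Python with pandas, of how a 30-day nonelective readmission flag could be derived from a regional hospitalization table. The column names and table layout are assumptions for illustration; the study itself used Stata and an all-payer regional database.

```python
import pandas as pd

def flag_30day_readmission(index_stays: pd.DataFrame, all_stays: pd.DataFrame) -> pd.Series:
    """Flag index stays followed by a nonelective admission within 30 days of discharge.

    Assumed columns (hypothetical): patient_id, admit_date, discharge_date, admission_type.
    """
    nonelective = all_stays[all_stays["admission_type"].isin(["emergency", "urgent", "trauma"])]
    flags = []
    for _, stay in index_stays.iterrows():
        window_end = stay["discharge_date"] + pd.Timedelta(days=30)
        later = nonelective[
            (nonelective["patient_id"] == stay["patient_id"])
            & (nonelective["admit_date"] > stay["discharge_date"])
            & (nonelective["admit_date"] <= window_end)
        ]
        flags.append(int(not later.empty))
    return pd.Series(flags, index=index_stays.index, name="readmit_30d")
```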

Predictor Variables for the Full‐Stay EHR Model

The full‐stay EHR model was iteratively developed from our group's previously derived and validated risk‐prediction model using EHR data available on admission (first‐day EHR model).[15] For the full‐stay EHR model, we included all predictor variables from our published first‐day EHR model as candidate risk factors. Based on prior literature, we additionally expanded candidate predictors available on admission to include marital status (a proxy for social isolation) and socioeconomic disadvantage (percent poverty, unemployment, median income, and educational attainment by zip code of residence as proxy measures of the social and built environment).[22, 23, 24, 25, 26, 27] We also expanded the ascertainment of prior hospitalizations to include admissions at both the index hospital and any of 75 acute care hospitals, using the same all‐payer regional hospitalization database (separate from the EHR) used to ascertain 30‐day readmissions.
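
As an illustration of this kind of data linkage, the sketch below merges hypothetical zip-code-level census measures onto the cohort and counts prior-year hospitalizations from a regional admissions table. All column and table names are assumptions; the study's actual extraction pipeline is not described at this level of detail.

```python
import pandas as pd

def add_admission_predictors(cohort: pd.DataFrame,
                             zip_measures: pd.DataFrame,
                             regional_stays: pd.DataFrame) -> pd.DataFrame:
    """Attach zip-code-level socioeconomic proxies and prior-year hospitalization counts.

    Assumed (hypothetical) columns: cohort(patient_id, zip_code, admit_date);
    zip_measures(zip_code, pct_poverty, pct_unemployed, median_income, pct_college);
    regional_stays(patient_id, admit_date).
    """
    out = cohort.merge(zip_measures, on="zip_code", how="left")
    counts = []
    for _, row in out.iterrows():
        # Prior hospitalizations at any regional hospital in the year before admission.
        prior = regional_stays[
            (regional_stays["patient_id"] == row["patient_id"])
            & (regional_stays["admit_date"] < row["admit_date"])
            & (regional_stays["admit_date"] >= row["admit_date"] - pd.Timedelta(days=365))
        ]
        counts.append(len(prior))
    out["prior_hospitalizations_1y"] = counts
    return out
```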

Candidate predictors from the remainder of the hospital stay (ie, following the first 24 hours of admission) were included if they were: (1) available in the EHR of all participating hospitals, (2) routinely collected or available at the time of hospital discharge, and (3) plausible predictors of adverse outcomes based on prior literature and clinical expertise. These included length of stay, in‐hospital complications, transfer to an intensive or coronary care unit, blood transfusions, vital sign instabilities within 24 hours of discharge, select laboratory values at time of discharge, and disposition status. We also assessed trajectories of vital signs and selected laboratory values (defined as changes in these measures from admission to discharge).
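
For example, the vital-sign-instability count and the trajectory features could be computed as in the sketch below; the thresholds mirror those listed in the Table 1 footnote, while the column names are assumptions for illustration.

```python
import pandas as pd

def count_discharge_instabilities(last_vitals: pd.DataFrame) -> pd.Series:
    """Count vital sign instabilities in the 24 hours before discharge.

    Assumed columns: temperature_c, heart_rate, respiratory_rate, systolic_bp, spo2.
    Thresholds follow the instability definitions in the Table 1 footnote.
    """
    return (
        (last_vitals["temperature_c"] >= 37.8).astype(int)
        + (last_vitals["heart_rate"] > 100).astype(int)
        + (last_vitals["respiratory_rate"] > 24).astype(int)
        + (last_vitals["systolic_bp"] <= 90).astype(int)
        + (last_vitals["spo2"] < 90).astype(int)
    ).rename("n_instabilities_at_discharge")

def trajectory(admission_values: pd.Series, discharge_values: pd.Series) -> pd.Series:
    """Change from admission to discharge for a selected lab or vital sign."""
    return (discharge_values - admission_values).rename("delta_admission_to_discharge")
```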

Statistical Analysis

Model Derivation

Univariate relationships between readmission and each of the candidate predictors were assessed in the derivation cohort using a prespecified significance threshold of P < 0.05. We included all factors from our previously derived and validated first‐day EHR model as candidate predictors.[15] Continuous laboratory and vital sign values at the time of discharge were categorized based on clinically meaningful cutoffs; predictors with missing values were assumed to be normal (<1% missing for each variable). Significant univariate candidate variables were entered in a multivariate logistic regression model using stepwise backward selection with a prespecified significance threshold of P < 0.05. We performed several sensitivity analyses to confirm the robustness of our model. First, we alternately derived the full‐stay model using stepwise forward selection. Second, we forced in all significant variables from our first‐day EHR model, and entered the candidate variables from the remainder of the hospital stay using both stepwise backward and forward selection separately. Third, prespecified interactions between variables were evaluated for inclusion. Though final predictors varied slightly between the different approaches, discrimination of each model was similar to the model derived using our primary analytic approach (C statistics within ±0.01, data not shown).
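
A minimal sketch of this screening-plus-backward-elimination procedure is shown below, using Python and statsmodels in place of the study's Stata code; the data frame layout and variable handling are assumptions for illustration, not the authors' implementation.

```python
import pandas as pd
import statsmodels.api as sm

def backward_select(y: pd.Series, candidates: pd.DataFrame, p_threshold: float = 0.05):
    """Univariate screen followed by stepwise backward elimination (illustrative)."""
    # Univariate screen: keep candidates significant in single-predictor models.
    screened = []
    for col in candidates.columns:
        uni = sm.Logit(y, sm.add_constant(candidates[[col]])).fit(disp=0)
        if uni.pvalues[col] <= p_threshold:
            screened.append(col)
    # Backward elimination: drop the least significant predictor until all remain significant.
    current = list(screened)
    while current:
        model = sm.Logit(y, sm.add_constant(candidates[current])).fit(disp=0)
        pvalues = model.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] <= p_threshold:
            return model
        current.remove(worst)
    raise ValueError("No candidate predictor met the significance threshold")
```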

Model Validation

We assessed model discrimination and calibration of the derived full‐stay EHR model using the validation cohort. Model discrimination was estimated by the C statistic. The C statistic represents the probability that, given 2 hospitalized individuals (1 who was readmitted and the other who was not), the model will predict a higher risk for the readmitted patient than for the nonreadmitted patient. Model calibration was assessed by comparing predicted to observed probabilities of readmission by quintiles of risk, and with the Hosmer‐Lemeshow goodness‐of‐fit test.
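
These quantities can be computed as in the sketch below (the C statistic as the area under the ROC curve, calibration by quantile of predicted risk, and the Hosmer-Lemeshow chi-square); this is a generic implementation of the described metrics, not the study's code.

```python
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.metrics import roc_auc_score

def validate(y_true, p_hat, n_bins: int = 5):
    """Return the C statistic, a calibration table by risk quantile, and the Hosmer-Lemeshow test."""
    c_statistic = roc_auc_score(y_true, p_hat)
    df = pd.DataFrame({"y": np.asarray(y_true), "p": np.asarray(p_hat)})
    df["bin"] = pd.qcut(df["p"], q=n_bins, labels=False, duplicates="drop")
    cal = df.groupby("bin").agg(n=("y", "size"), observed=("y", "mean"), predicted=("p", "mean"))
    # Hosmer-Lemeshow statistic: sum over bins of (O - E)^2 / (E * (1 - mean predicted risk)).
    expected = cal["predicted"] * cal["n"]
    observed = cal["observed"] * cal["n"]
    hl = (((observed - expected) ** 2) / (expected * (1 - cal["predicted"]))).sum()
    p_value = 1 - stats.chi2.cdf(hl, df=len(cal) - 2)
    return c_statistic, cal, hl, p_value
```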

Comparison to Existing Models

We compared the full‐stay EHR model performance to 3 previously published models: our group's first‐day EHR model, and the LACE (includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year) and HOSPITAL (includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay) models, which were both derived to predict 30‐day readmissions among general medical inpatients and were intended to help clinicians identify high‐risk patients to target for discharge interventions.[9, 10, 15] We assessed each model's performance in our validation cohort, calculating the C statistic, integrated discrimination index (IDI), and net reclassification index (NRI) compared to the full‐stay model. IDI is a summary measure of both discrimination and reclassification, where more positive values suggest improvement in model performance in both these domains compared to a reference model.[28] The NRI is defined as the sum of the net proportions of correctly reclassified persons with and without the event of interest.[29] The theoretical range of values is −2 to 2, with more positive values indicating improved net reclassification compared to a reference model. Here, we calculated a category‐based NRI to evaluate the performance of models in correctly classifying individuals with and without readmissions into the highest readmission risk quintile versus the lowest 4 risk quintiles compared to the full‐stay EHR model.[29] This prespecified cutoff is relevant for hospitals interested in identifying the highest‐risk individuals for targeted intervention.[6] Because some hospitals may be able to target a greater number of individuals for intervention, we performed a sensitivity analysis by assessing category‐based NRI for reclassification into the top 2 risk quintiles versus the lowest 3 risk quintiles and found no meaningful difference in our results (data not shown). Finally, we qualitatively assessed calibration of comparator models in our validation cohort by comparing predicted probability to observed probability of readmission by quintiles of risk for each model. We conducted all analyses using Stata 12.1 (StataCorp, College Station, TX). This study was approved by the UT Southwestern Medical Center institutional review board.
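
The IDI and the category-based NRI can be computed as in the sketch below, using the top-quintile-versus-rest cutoff described in the text. These follow the standard published formulas applied to that cutoff; they are illustrative rather than the study's own code.

```python
import numpy as np

def idi(y_true, p_new, p_ref):
    """Integrated discrimination improvement of a new model over a reference model."""
    y = np.asarray(y_true, dtype=bool)
    p_new, p_ref = np.asarray(p_new, dtype=float), np.asarray(p_ref, dtype=float)
    slope_new = p_new[y].mean() - p_new[~y].mean()
    slope_ref = p_ref[y].mean() - p_ref[~y].mean()
    return slope_new - slope_ref

def category_nri(y_true, p_new, p_ref, top_fraction: float = 0.20):
    """Category-based NRI for reclassification into the top risk quintile versus the rest."""
    y = np.asarray(y_true, dtype=bool)
    p_new, p_ref = np.asarray(p_new, dtype=float), np.asarray(p_ref, dtype=float)
    high_new = p_new >= np.quantile(p_new, 1 - top_fraction)
    high_ref = p_ref >= np.quantile(p_ref, 1 - top_fraction)
    up = high_new & ~high_ref      # moved into the high-risk category
    down = ~high_new & high_ref    # moved out of the high-risk category
    nri_events = up[y].mean() - down[y].mean()
    nri_nonevents = down[~y].mean() - up[~y].mean()
    return nri_events + nri_nonevents
```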

RESULTS

Overall, 32,922 index hospitalizations were included in our study cohort; 12.7% resulted in a 30‐day readmission (see Supporting Figure 1 in the online version of this article). Individuals had a mean age of 62 years and had diverse race/ethnicity and primary insurance status; half were female (Table 1). The study sample was randomly split into a derivation cohort (50%, n = 16,492) and validation cohort (50%, n = 16,430). Individuals in the derivation cohort with a 30‐day readmission had markedly different socioeconomic and clinical characteristics compared to those not readmitted (Table 1).

Table 1. Baseline Characteristics and Candidate Variables for Risk‐Prediction Model

Columns: Entire Cohort (N = 32,922); Derivation Cohort (N = 16,492): No Readmission (N = 14,312), Readmission (N = 2,180); P Value.

NOTE: Abbreviations: ED, emergency department; ICU, intensive care unit; IQR, interquartile range; SD, standard deviation. *≥20% poverty in zip code as per high‐poverty area US Census designation. Prior ED visit at site of index hospitalization within the past year. Prior hospitalization at any of 75 acute care hospitals in the North Texas region within the past year. Nonelective admission defined as hospitalization categorized as medical emergency, urgent, or trauma. ∥Calculated from diagnoses available within 1 year prior to index hospitalization. Conditions were considered complications if they were not listed as a principal diagnosis for hospitalization or as a previous diagnosis in the prior year. #On day of discharge or last known observation before discharge. Instabilities were defined as temperature ≥37.8°C, heart rate >100 beats/minute, respiratory rate >24 breaths/minute, systolic blood pressure ≤90 mm Hg, or oxygen saturation <90%. **Discharges to nursing home, skilled nursing facility, or long‐term acute care hospital.

Demographic characteristics
Age, y, mean (SD) 62 (17.3) 61 (17.4) 64 (17.0) <0.001
Female, n (%) 17,715 (53.8) 7,694 (53.8) 1,163 (53.3) 0.72
Race/ethnicity <0.001
White 21,359 (64.9) 9,329 (65.2) 1,361 (62.4)
Black 5,964 (18.1) 2,520 (17.6) 434 (19.9)
Hispanic 4,452 (13.5) 1,931 (13.5) 338 (15.5)
Other 1,147 (3.5) 532 (3.7) 47 (2.2)
Marital status, n (%) <0.001
Single 8,076 (24.5) 3,516 (24.6) 514 (23.6)
Married 13,394 (40.7) 5,950 (41.6) 812 (37.3)
Separated/divorced 3,468 (10.5) 1,460 (10.2) 251 (11.5)
Widowed 4,487 (13.7) 1,868 (13.1) 388 (17.8)
Other 3,497 (10.6) 1,518 (10.6) 215 (9.9)
Primary payer, n (%) <0.001
Private 13,090 (39.8) 5,855 (40.9) 726 (33.3)
Medicare 13,015 (39.5) 5,597 (39.1) 987 (45.3)
Medicaid 2,204 (6.7) 852 (5.9) 242 (11.1)
Charity, self‐pay, or other 4,613 (14.0) 2,008 (14.0) 225 (10.3)
High‐poverty neighborhood, n (%)* 7,468 (22.7) 3,208 (22.4) 548 (25.1) <0.001
Utilization history
≥1 ED visits in past year, n (%) 9,299 (28.2) 3,793 (26.5) 823 (37.8) <0.001
≥1 hospitalizations in past year, n (%) 10,189 (30.9) 4,074 (28.5) 1,012 (46.4) <0.001
Clinical factors from first day of hospitalization
Nonelective admission, n (%) 27,818 (84.5) 11,960 (83.6) 1,960 (89.9) <0.001
Charlson Comorbidity Index, median (IQR)∥ 0 (0–1) 0 (0–0) 0 (0–3) <0.001
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 355 (1.1) 119 (0.8) 46 (2.1) <0.001
Albumin 2–3 g/dL 4,732 (14.4) 1,956 (13.7) 458 (21.0) <0.001
Aspartate aminotransferase >40 U/L 4,610 (14.0) 1,922 (13.4) 383 (17.6) <0.001
Creatine phosphokinase <60 µg/L 3,728 (11.3) 1,536 (10.7) 330 (15.1) <0.001
Mean corpuscular volume >100 fL/red cell 1,346 (4.1) 537 (3.8) 134 (6.2) <0.001
Platelets <90 × 10³/µL 912 (2.8) 357 (2.5) 116 (5.3) <0.001
Platelets >350 × 10³/µL 3,332 (10.1) 1,433 (10.0) 283 (13.0) <0.001
Prothrombin time >35 seconds 248 (0.8) 90 (0.6) 35 (1.6) <0.001
Clinical factors from remainder of hospital stay
Length of stay, d, median (IQR) 4 (2–6) 4 (2–6) 5 (3–8) <0.001
ICU transfer after first 24 hours, n (%) 988 (3.0) 408 (2.9) 94 (4.3) <0.001
Hospital complications, n (%)
Clostridium difficile infection 119 (0.4) 44 (0.3) 24 (1.1) <0.001
Pressure ulcer 358 (1.1) 126 (0.9) 46 (2.1) <0.001
Venous thromboembolism 301 (0.9) 112 (0.8) 34 (1.6) <0.001
Respiratory failure 1,048 (3.2) 463 (3.2) 112 (5.1) <0.001
Central line‐associated bloodstream infection 22 (0.07) 6 (0.04) 5 (0.23) 0.005
Catheter‐associated urinary tract infection 47 (0.14) 20 (0.14) 6 (0.28) 0.15
Acute myocardial infarction 293 (0.9) 110 (0.8) 32 (1.5) <0.001
Pneumonia 1,754 (5.3) 719 (5.0) 154 (7.1) <0.001
Sepsis 853 (2.6) 368 (2.6) 73 (3.4) 0.04
Blood transfusion during hospitalization, n (%) 4,511 (13.7) 1,837 (12.8) 425 (19.5) <0.001
Laboratory abnormalities at discharge#
Blood urea nitrogen >20 mg/dL, n (%) 10,014 (30.4) 4,077 (28.5) 929 (42.6) <0.001
Sodium <135 mEq/L, n (%) 4,583 (13.9) 1,850 (12.9) 440 (20.2) <0.001
Hematocrit ≤27% 3,104 (9.4) 1,231 (8.6) 287 (13.2) <0.001
≥1 vital sign instability at discharge, n (%)# 6,192 (18.8) 2,624 (18.3) 525 (24.1) <0.001
Discharge location, n (%) <0.001
Home 23,339 (70.9) 10,282 (71.8) 1,383 (63.4)
Home health 3,185 (9.7) 1,356 (9.5) 234 (10.7)
Postacute care** 5,990 (18.2) 2,496 (17.4) 549 (25.2)
Hospice 408 (1.2) 178 (1.2) 14 (0.6)

Derivation and Validation of the Full‐Stay EHR Model for 30‐Day Readmission

Our final model included 24 independent variables, including demographic characteristics, utilization history, clinical factors from the first day of admission, and clinical factors from the remainder of the hospital stay (Table 2). The strongest independent predictor of readmission was hospital‐acquired Clostridium difficile infection (adjusted odds ratio [AOR]: 2.03, 95% confidence interval [CI] 1.18‐3.48); other hospital‐acquired complications including pressure ulcers and venous thromboembolism were also significant predictors. Though having Medicaid was associated with increased odds of readmission (AOR: 1.55, 95% CI: 1.31‐1.83), other zip codelevel measures of socioeconomic disadvantage were not predictive and were not included in the final model. Being discharged to hospice was associated with markedly lower odds of readmission (AOR: 0.23, 95% CI: 0.13‐0.40).

Table 2. Final Full‐Stay EHR Model Predicting 30‐Day Readmissions (Derivation Cohort, N = 16,492)

Columns: Univariate Odds Ratio (95% CI); Multivariate Odds Ratio (95% CI)*.

NOTE: Abbreviations: CI, confidence interval; ED, emergency department. *Values shown reflect adjusted odds ratios and 95% CI for each factor after adjustment for all other factors listed in the table.

Demographic characteristics
Age, per 10 years 1.08 (1.05–1.11) 1.07 (1.04–1.10)
Medicaid 1.97 (1.70–2.29) 1.55 (1.31–1.83)
Widow 1.44 (1.28–1.63) 1.27 (1.11–1.45)
Utilization history
Prior ED visit, per visit 1.08 (1.06–1.10) 1.04 (1.02–1.06)
Prior hospitalization, per hospitalization 1.30 (1.27–1.34) 1.16 (1.12–1.20)
Hospital and clinical factors from first day of hospitalization
Nonelective admission 1.75 (1.51–2.03) 1.42 (1.22–1.65)
Charlson Comorbidity Index, per point 1.19 (1.17–1.21) 1.06 (1.04–1.09)
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 2.57 (1.82–3.62) 1.52 (1.05–2.21)
Albumin 2–3 g/dL 1.68 (1.50–1.88) 1.20 (1.06–1.36)
Aspartate aminotransferase >40 U/L 1.37 (1.22–1.55) 1.21 (1.06–1.38)
Creatine phosphokinase <60 µg/L 1.48 (1.30–1.69) 1.28 (1.11–1.46)
Mean corpuscular volume >100 fL/red cell 1.68 (1.38–2.04) 1.32 (1.07–1.62)
Platelets <90 × 10³/µL 2.20 (1.77–2.72) 1.56 (1.23–1.97)
Platelets >350 × 10³/µL 1.34 (1.17–1.54) 1.24 (1.08–1.44)
Prothrombin time >35 seconds 2.58 (1.74–3.82) 1.92 (1.27–2.90)
Hospital and clinical factors from remainder of hospital stay
Length of stay, per day 1.08 (1.07–1.09) 1.06 (1.04–1.07)
Hospital complications
Clostridium difficile infection 3.61 (2.19–5.95) 2.03 (1.18–3.48)
Pressure ulcer 2.43 (1.73–3.41) 1.64 (1.15–2.34)
Venous thromboembolism 2.01 (1.36–2.96) 1.55 (1.03–2.32)
Laboratory abnormalities at discharge
Blood urea nitrogen >20 mg/dL 1.86 (1.70–2.04) 1.37 (1.24–1.52)
Sodium <135 mEq/L 1.70 (1.52–1.91) 1.34 (1.18–1.51)
Hematocrit ≤27% 1.61 (1.40–1.85) 1.22 (1.05–1.41)
Vital sign instability at discharge, per instability 1.29 (1.20–1.40) 1.25 (1.15–1.36)
Discharged to hospice 0.51 (0.30–0.89) 0.23 (0.13–0.40)

In our validation cohort, the full‐stay EHR model had fair discrimination, with a C statistic of 0.69 (95% CI: 0.68‐0.70) (Table 3). The full‐stay EHR model was well calibrated across all quintiles of risk, with slight overestimation of predicted risk in the lowest and highest quintiles (Figure 1a) (see Supporting Table 5 in the online version of this article). It also effectively stratified individuals across a broad range of predicted readmission risk from 4.1% in the lowest decile to 36.5% in the highest decile (Table 3).

Table 3. Comparison of the Discrimination and Reclassification of Different Readmission Models*

Columns: Model Name; C‐Statistic (95% CI); IDI, % (95% CI); NRI (95% CI); Average Predicted Risk, % (Lowest Decile, Highest Decile).

NOTE: Abbreviations: CI, confidence interval; EHR, electronic health record; IDI, Integrated Discrimination Improvement; NRI, Net Reclassification Index. *All measures were assessed using the validation cohort (N = 16,430), except for estimating the C‐statistic for the derivation cohort. P value <0.001 for all pairwise comparisons of C‐statistic between full‐stay model and first‐day, LACE, and HOSPITAL models, respectively. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Full‐stay EHR model
Derivation cohort 0.72 (0.70 to 0.73) 4.1 36.5
Validation cohort 0.69 (0.68 to 0.70) [Reference] [Reference] 4.1 36.5
First‐day EHR model 0.67 (0.66 to 0.68) −1.2 (−1.4 to −1.0) −0.020 (−0.038 to −0.002) 5.8 31.9
LACE model 0.65 (0.64 to 0.66) −2.6 (−2.9 to −2.3) −0.046 (−0.067 to −0.024) 6.1 27.5
HOSPITAL model 0.64 (0.62 to 0.65) −3.2 (−3.5 to −2.9) −0.058 (−0.080 to −0.035) 6.7 26.6
Figure 1. Comparison of the calibration of different readmission models. Calibration graphs for full‐stay (a), first‐day (b), LACE (c), and HOSPITAL (d) models in the validation cohort. Each graph shows predicted probability compared to observed probability of readmission by quintiles of risk for each model. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Comparing the Performance of the Full‐Stay EHR Model to Other Models

The full‐stay EHR model had better discrimination compared to the first‐day EHR model and the LACE and HOSPITAL models, though the magnitude of improvement was modest (Table 3). The full‐stay EHR model also stratified individuals across a broader range of readmission risk, and was better able to discriminate and classify those in the highest quintile of risk from those in the lowest 4 quintiles of risk compared to other models as assessed by the IDI and NRI (Table 3) (see Supporting Tables 14 and Supporting Figure 2 in the online version of this article). In terms of model calibration, both the first‐day EHR and LACE models were also well calibrated, whereas the HOSPITAL model was less robust (Figure 1).

The diagnostic accuracy of the full‐stay EHR model in correctly predicting those in the highest quintile of risk was better than that of the first‐day, LACE, and HOSPITAL models, though overall improvements in the sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios were also modest (see Supporting Table 6 in the online version of this article).
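
For reference, diagnostic-accuracy measures at the top-quintile cutoff can be computed from a simple 2x2 table, as in the illustrative sketch below; the cutoff follows the text, and the code is a generic implementation rather than the study's own.

```python
import numpy as np

def top_quintile_accuracy(y_true, p_hat):
    """Sensitivity, specificity, predictive values, and likelihood ratios at the top-quintile cutoff."""
    y = np.asarray(y_true, dtype=bool)
    p = np.asarray(p_hat, dtype=float)
    flagged = p >= np.quantile(p, 0.80)  # highest quintile of predicted risk
    tp = np.sum(flagged & y)
    fp = np.sum(flagged & ~y)
    fn = np.sum(~flagged & y)
    tn = np.sum(~flagged & ~y)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_positive": sensitivity / (1 - specificity),
        "lr_negative": (1 - sensitivity) / specificity,
    }
```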

DISCUSSION

In this study, we used clinically detailed EHR data from the entire hospitalization on 32,922 individuals treated in 6 diverse hospitals to develop an all‐payer, multicondition readmission risk‐prediction model. To our knowledge, this is the first 30‐day hospital readmission risk‐prediction model to use a comprehensive set of factors from EHR data from the entire hospital stay. Prior EHR‐based models have focused exclusively on data available on or prior to the first day of admission, which account for clinical severity on admission but do not account for factors uncovered during the inpatient stay that influence the chance of a postdischarge adverse outcome.[15, 30] We specifically assessed the prognostic impact of a comprehensive set of factors from the entire index hospitalization, including hospital‐acquired complications, clinical trajectory, and stability on discharge in predicting hospital readmissions. Our full‐stay EHR model had statistically better discrimination, calibration, and diagnostic accuracy than our existing all‐cause first‐day EHR model[15] and 2 previously published readmissions models that included more limited information from hospitalization (such as length of stay).[9, 10] However, although the more complicated full‐stay EHR model was statistically better than previously published models, we were surprised that the predictive performance was only modestly improved despite the inclusion of many additional clinically relevant prognostic factors.

Taken together, our study has several important implications. First, the added complexity and resource intensity of implementing a full‐stay EHR model yields only modestly improved readmission risk prediction. Thus, hospitals and healthcare systems interested in targeting their highest‐risk individuals for interventions to reduce 30‐day readmission should consider doing so within the first day of hospital admission. Our group's previously derived and validated first‐day EHR model, which used data only from the first day of admission, qualitatively performed nearly as well as the full‐stay EHR model.[15] Additionally, a recent study using only preadmission EHR data to predict 30‐day readmissions also achieved similar discrimination and diagnostic accuracy as our full‐stay model.[30]

Second, the field of readmissions risk‐prediction modeling may be reaching the maximum achievable model performance using data that are currently available in the EHR. Our limited ability to accurately predict all‐cause 30‐day readmission risk may reflect the influence of currently unmeasured patient, system, and community factors on readmissions.[31, 32, 33] Due to the constraints of data collected in the EHR, we were unable to include several patient‐level clinical characteristics associated with hospital readmission, including self‐perceived health status, functional impairment, and cognition.[33, 34, 35, 36] However, given their modest effect sizes (ORs ranging from 1.06 to 2.10), adequately measuring and including these risk factors in our model may not meaningfully improve model performance and diagnostic accuracy. Further, many social and behavioral patient‐level factors are also not consistently available in EHR data. Though we explored the role of several neighborhood‐level socioeconomic measures (including prevalence of poverty, median income, education, and unemployment), we found that none were significantly associated with 30‐day readmissions. These particular measures may have been inadequate to characterize individual‐level social and behavioral factors, as several previous studies have demonstrated that patient‐level factors such as social support, substance abuse, and medication and visit adherence can influence readmission risk in heart failure and pneumonia.[11, 16, 22, 25] This underscores the need for more standardized routine collection of data across functional, social, and behavioral domains in clinical settings, as recently championed by the Institute of Medicine.[11, 37] Integrating data from outside the EHR on postdischarge health behaviors, self‐management, follow‐up care, recovery, and home environment may be another important but untapped strategy for further improving prediction of readmissions.[25, 38]

Third, a multicondition readmission risk‐prediction model may be a less effective strategy than more customized disease‐specific models for selected conditions associated with high 30‐day readmission rates. Our group's previously derived and internally validated models for heart failure and human immunodeficiency virus had superior discrimination compared to our full‐stay EHR model (C statistic of 0.72 for each).[11, 13] However, given differences in the included population and time periods studied, a head‐to‐head comparison of these different strategies is needed to assess differences in model performance and utility.

Our study had several strengths. To our knowledge, this is the first study to rigorously measure the additive influence of in‐hospital complications, clinical trajectory, and stability on discharge on the risk of 30‐day hospital readmission. Additionally, our study included a large, diverse study population that included all payers, all ages of adults, a mix of community, academic, and safety net hospitals, and individuals from a broad array of racial/ethnic and socioeconomic backgrounds.

Our results should be interpreted in light of several limitations. First, though we sought to represent a diverse group of hospitals, all study sites were located within north Texas and generalizability to other regions is uncertain. Second, our ascertainment of prior hospitalizations and readmissions was more inclusive than what could be typically accomplished in real time using only EHR data from a single clinical site. We performed a sensitivity analysis using only prior utilization data available within the EHR from the index hospital with no meaningful difference in our findings (data not shown). Additionally, a recent study found that 30‐day readmissions occur at the index hospital for over 75% of events, suggesting that 30‐day readmissions are fairly comprehensively captured even with only single‐site data.[39] Third, we were not able to include data on outpatient visits before or after the index hospitalization, which may influence the risk of readmission.[1, 40]

In conclusion, incorporating clinically granular EHR data from the entire course of hospitalization modestly improves prediction of 30‐day readmissions compared to models that only include information from the first 24 hours of hospital admission or models that use far fewer variables. However, given the limited improvement in prediction, our findings suggest that from the practical perspective of implementing real‐time models to identify those at highest risk for readmission, it may not be worth the added complexity of waiting until the end of a hospitalization to leverage additional data on hospital complications, and the trajectory of laboratory and vital sign values currently available in the EHR. Further improvement in prediction of readmissions will likely require accounting for psychosocial, functional, behavioral, and postdischarge factors not currently present in the inpatient EHR.

Disclosures: This study was presented at the Society of Hospital Medicine 2015 Annual Meeting in National Harbor, Maryland, and the Society of General Internal Medicine 2015 Annual Meeting in Toronto, Canada. This work was supported by the Agency for Healthcare Research and Quality-funded UT Southwestern Center for Patient‐Centered Outcomes Research (1R24HS022418‐01) and the Commonwealth Foundation (#20100323). Drs. Nguyen and Makam received funding from the UT Southwestern KL2 Scholars Program (NIH/NCATS KL2 TR001103). Dr. Halm was also supported in part by NIH/NCATS U54 RFA‐TR‐12‐006. The study sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. The authors have no conflicts of interest to disclose.

References
1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
2. Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
3. Rennke S, Nguyen OK, Shoeb MH, Magan Y, Wachter RM, Ranji SR. Hospital‐initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):433–440.
4. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
5. Rennke S, Shoeb MH, Nguyen OK, Magan Y, Wachter RM, Ranji SR. Interventions to Improve Care Transitions at Hospital Discharge. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
6. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22(12):998–1005.
7. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood). 2014;33(7):1148–1154.
8. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
9. Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551–557.
10. Donze J, Aujesky D, Williams D, Schnipper JL. Potentially avoidable 30‐day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632–638.
11. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30‐day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–988.
12. Singal AG, Rahimi RS, Clark C, et al. An automated model using electronic medical record data identifies patients with cirrhosis at high risk for readmission. Clin Gastroenterol Hepatol. 2013;11(10):1335–1341.e1331.
13. Nijhawan AE, Clark C, Kaplan R, Moore B, Halm EA, Amarasingham R. An electronic medical record‐based model to predict 30‐day risk of readmission and death among HIV‐infected inpatients. J Acquir Immune Defic Syndr. 2012;61(3):349–358.
14. Horwitz LI, Partovian C, Lin Z, et al. Development and use of an administrative claims measure for profiling hospital‐wide performance on 30‐day unplanned readmission. Ann Intern Med. 2014;161(10 suppl):S66–S75.
15. Amarasingham R, Velasco F, Xie B, et al. Electronic medical record‐based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models. BMC Med Inform Decis Mak. 2015;15(1):39.
16. Watson AJ, O'Rourke J, Jethwani K, et al. Linking electronic health record‐extracted psychosocial data in real‐time to risk of readmission for heart failure. Psychosomatics. 2011;52(4):319–327.
17. Ashton CM, Wray NP. A conceptual framework for the study of early readmission as an indicator of quality of care. Soc Sci Med. 1996;43(11):1533–1541.
18. Dharmarajan K, Hsieh AF, Lin Z, et al. Hospital readmission performance and patterns of readmission: retrospective cohort study of Medicare admissions. BMJ. 2013;347:f6571.
19. Cassel CK, Conway PH, Delbanco SF, Jha AK, Saunders RS, Lee TH. Getting more performance from performance measurement. N Engl J Med. 2014;371(23):2145–2147.
20. Bradley EH, Sipsma H, Horwitz LI, et al. Hospital strategy uptake and reductions in unplanned readmission rates for patients with heart failure: a prospective study. J Gen Intern Med. 2015;30(5):605–611.
21. Krumholz HM. Post‐hospital syndrome—an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100–102.
22. Calvillo‐King L, Arnold D, Eubank KJ, et al. Impact of social factors on risk of readmission or mortality in pneumonia and heart failure: systematic review. J Gen Intern Med. 2013;28(2):269–282.
23. Keyhani S, Myers LJ, Cheng E, Hebert P, Williams LS, Bravata DM. Effect of clinical and social risk factors on hospital profiling for stroke readmission: a cohort study. Ann Intern Med. 2014;161(11):775–784.
24. Kind AJ, Jencks S, Brock J, et al. Neighborhood socioeconomic disadvantage and 30‐day rehospitalization: a retrospective cohort study. Ann Intern Med. 2014;161(11):765–774.
25. Arbaje AI, Wolff JL, Yu Q, Powe NR, Anderson GF, Boult C. Postdischarge environmental and socioeconomic factors and the likelihood of early hospital readmission among community‐dwelling Medicare beneficiaries. Gerontologist. 2008;48(4):495–504.
26. Hu J, Gonsahn MD, Nerenz DR. Socioeconomic status and readmissions: evidence from an urban teaching hospital. Health Aff (Millwood). 2014;33(5):778–785.
27. Nagasako EM, Reidhead M, Waterman B, Dunagan WC. Adding socioeconomic data to hospital readmissions calculations may produce more useful results. Health Aff (Millwood). 2014;33(5):786–791.
28. Pencina MJ, D'Agostino RB, D'Agostino RB, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157–172; discussion 207–212.
29. Leening MJ, Vedder MM, Witteman JC, Pencina MJ, Steyerberg EW. Net reclassification improvement: computation, interpretation, and controversies: a literature review and clinician's guide. Ann Intern Med. 2014;160(2):122–131.
30. Shadmi E, Flaks‐Manov N, Hoshen M, Goldman O, Bitterman H, Balicer RD. Predicting 30‐day readmissions with preadmission electronic health record data. Med Care. 2015;53(3):283–289.
31. Kangovi S, Grande D. Hospital readmissions—not just a measure of quality. JAMA. 2011;306(16):1796–1797.
32. Joynt KE, Jha AK. Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
33. Greysen SR, Stijacic Cenzer I, Auerbach AD, Covinsky KE. Functional impairment and hospital readmission in Medicare seniors. JAMA Intern Med. 2015;175(4):559–565.
34. Holloway JJ, Thomas JW, Shapiro L. Clinical and sociodemographic risk factors for readmission of Medicare beneficiaries. Health Care Financ Rev. 1988;10(1):27–36.
35. Patel A, Parikh R, Howell EH, Hsich E, Landers SH, Gorodeski EZ. Mini‐cog performance: novel marker of post discharge risk among patients hospitalized for heart failure. Circ Heart Fail. 2015;8(1):8–16.
36. Hoyer EH, Needham DM, Atanelov L, Knox B, Friedman M, Brotman DJ. Association of impaired functional status at hospital discharge and subsequent rehospitalization. J Hosp Med. 2014;9(5):277–282.
37. Adler NE, Stead WW. Patients in context—EHR capture of social and behavioral determinants of health. N Engl J Med. 2015;372(8):698–701.
38. Nguyen OK, Chan CV, Makam A, Stieglitz H, Amarasingham R. Envisioning a social‐health information exchange as a platform to support a patient‐centered medical neighborhood: a feasibility study. J Gen Intern Med. 2015;30(1):60–67.
39. Henke RM, Karaca Z, Lin H, Wier LM, Marder W, Wong HS. Patient factors contributing to variation in same‐hospital readmission rate. Med Care Res Rev. 2015;72(3):338–358.
40. Weinberger M, Oddone EZ, Henderson WG. Does increased access to primary care reduce hospital readmissions? Veterans Affairs Cooperative Study Group on Primary Care and Hospital Readmission. N Engl J Med. 1996;334(22):1441–1447.
Journal of Hospital Medicine. 11(7):473–480.

Unplanned hospital readmissions are frequent, costly, and potentially avoidable.[1, 2] Due to major federal financial readmissions penalties targeting excessive 30‐day readmissions, there is increasing attention to implementing hospital‐initiated interventions to reduce readmissions.[3, 4] However, universal enrollment of all hospitalized patients into such programs may be too resource intensive for many hospitals.[5] To optimize efficiency and effectiveness, interventions should be targeted to individuals most likely to benefit.[6, 7] However, existing readmission risk‐prediction models have achieved only modest discrimination, have largely used administrative claims data not available until months after discharge, or are limited to only a subset of patients with Medicare or a specific clinical condition.[8, 9, 10, 11, 12, 13, 14] These limitations have precluded accurate identification of high‐risk individuals in an all‐payer general medical inpatient population to provide actionable information for intervention prior to discharge.

Approaches using electronic health record (EHR) data could allow early identification of high‐risk patients during the index hospitalization to enable initiation of interventions prior to discharge. To date, such strategies have relied largely on EHR data from the day of admission.[15, 16] However, given that variation in 30‐day readmission rates are thought to reflect the quality of in‐hospital care, incorporating EHR data from the entire hospital stay to reflect hospital care processes and clinical trajectory may more accurately identify at‐risk patients.[17, 18, 19, 20] Improved accuracy in risk prediction would help better target intervention efforts in the immediate postdischarge period, an interval characterized by heightened vulnerability for adverse events.[21]

To help hospitals target transitional care interventions more effectively to high‐risk individuals prior to discharge, we derived and validated a readmissions risk‐prediction model incorporating EHR data from the entire course of the index hospitalization, which we termed the full‐stay EHR model. We also compared the full‐stay EHR model performance to our group's previously derived prediction model based on EHR data on the day of admission, termed the first‐day EHR model, as well as to 2 other validated readmission models similarly intended to yield near real‐time risk predictions prior to or shortly after hospital discharge.[9, 10, 15]

METHODS

Study Design, Population, and Data Sources

We conducted an observational cohort study using EHR data from 6 hospitals in the DallasFort Worth metroplex between November 1, 2009 and October 30, 2010 using the same EHR system (Epic Systems Corp., Verona, WI). One site was a university‐affiliated safety net hospital; the remaining 5 sites were teaching and nonteaching community sites.

We included consecutive hospitalizations among adults 18 years old discharged alive from any medicine inpatient service. For individuals with multiple hospitalizations during the study period, we included only the first hospitalization. We excluded individuals who died during the index hospitalization, were transferred to another acute care facility, left against medical advice, or who died outside of the hospital within 30 days of discharge. For model derivation, we randomly split the sample into separate derivation (50%) and validation cohorts (50%).

Outcomes

The primary outcome was 30‐day hospital readmission, defined as a nonelective hospitalization within 30 days of discharge to any of 75 acute care hospitals within a 100‐mile radius of Dallas, ascertained from an all‐payer regional hospitalization database. Nonelective hospitalizations included all hospitalizations classified as a emergency, urgent, or trauma, and excluded those classified as elective as per the Centers for Medicare and Medicaid Services Claim Inpatient Admission Type Code definitions.

Predictor Variables for the Full‐Stay EHR Model

The full‐stay EHR model was iteratively developed from our group's previously derived and validated risk‐prediction model using EHR data available on admission (first‐day EHR model).[15] For the full‐stay EHR model, we included all predictor variables included in our published first‐day EHR model as candidate risk factors. Based on prior literature, we additionally expanded candidate predictors available on admission to include marital status (proxy for social isolation) and socioeconomic disadvantage (percent poverty, unemployment, median income, and educational attainment by zip code of residence as proxy measures of the social and built environment).[22, 23, 24, 25, 26, 27] We also expanded the ascertainment of prior hospitalization to include admissions at both the index hospital and any of 75 acute care hospitals from the same, separate all‐payer regional hospitalization database used to ascertain 30‐day readmissions.

Candidate predictors from the remainder of the hospital stay (ie, following the first 24 hours of admission) were included if they were: (1) available in the EHR of all participating hospitals, (2) routinely collected or available at the time of hospital discharge, and (3) plausible predictors of adverse outcomes based on prior literature and clinical expertise. These included length of stay, in‐hospital complications, transfer to an intensive or coronary care unit, blood transfusions, vital sign instabilities within 24 hours of discharge, select laboratory values at time of discharge, and disposition status. We also assessed trajectories of vital signs and selected laboratory values (defined as changes in these measures from admission to discharge).

Statistical Analysis

Model Derivation

Univariate relationships between readmission and each of the candidate predictors were assessed in the derivation cohort using a prespecified significance threshold of P 0.05. We included all factors from our previously derived and validated first‐day EHR model as candidate predictors.[15] Continuous laboratory and vital sign values at the time of discharge were categorized based on clinically meaningful cutoffs; predictors with missing values were assumed to be normal (<1% missing for each variable). Significant univariate candidate variables were entered in a multivariate logistic regression model using stepwise backward selection with a prespecified significance threshold of P 0.05. We performed several sensitivity analyses to confirm the robustness of our model. First, we alternately derived the full‐stay model using stepwise forward selection. Second, we forced in all significant variables from our first‐day EHR model, and entered the candidate variables from the remainder of the hospital stay using both stepwise backward and forward selection separately. Third, prespecified interactions between variables were evaluated for inclusion. Though final predictors varied slightly between the different approaches, discrimination of each model was similar to the model derived using our primary analytic approach (C statistics 0.01, data not shown).

Model Validation

We assessed model discrimination and calibration of the derived full‐stay EHR model using the validation cohort. Model discrimination was estimated by the C statistic. The C statistic represents the probability that, given 2 hospitalized individuals (1 who was readmitted and the other who was not), the model will predict a higher risk for the readmitted patient than for the nonreadmitted patient. Model calibration was assessed by comparing predicted to observed probabilities of readmission by quintiles of risk, and with the Hosmer‐Lemeshow goodness‐of‐fit test.

Comparison to Existing Models

We compared the full‐stay EHR model performance to 3 previously published models: our group's first‐day EHR model, and the LACE (includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year) and HOSPITAL (includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay) models, which were both derived to predict 30‐day readmissions among general medical inpatients and were intended to help clinicians identify high‐risk patients to target for discharge interventions.[9, 10, 15] We assessed each model's performance in our validation cohort, calculating the C statistic, integrated discrimination index (IDI), and net reclassification index (NRI) compared to the full‐stay model. IDI is a summary measure of both discrimination and reclassification, where more positive values suggest improvement in model performance in both these domains compared to a reference model.[28] The NRI is defined as the sum of the net proportions of correctly reclassified persons with and without the event of interest.[29] The theoretical range of values is 2 to 2, with more positive values indicating improved net reclassification compared to a reference model. Here, we calculated a category‐based NRI to evaluate the performance of models in correctly classifying individuals with and without readmissions into the highest readmission risk quintile versus the lowest 4 risk quintiles compared to the full‐stay EHR model.[29] This prespecified cutoff is relevant for hospitals interested in identifying the highest‐risk individuals for targeted intervention.[6] Because some hospitals may be able to target a greater number of individuals for intervention, we performed a sensitivity analysis by assessing category‐based NRI for reclassification into the top 2 risk quintiles versus the lowest 3 risk quintiles and found no meaningful difference in our results (data not shown). Finally, we qualitatively assessed calibration of comparator models in our validation cohort by comparing predicted probability to observed probability of readmission by quintiles of risk for each model. We conducted all analyses using Stata 12.1 (StataCorp, College Station, TX). This study was approved by the UT Southwestern Medical Center institutional review board.

RESULTS

Overall, 32,922 index hospitalizations were included in our study cohort; 12.7% resulted in a 30‐day readmission (see Supporting Figure 1 in the online version of this article). Individuals had a mean age of 62 years and had diverse race/ethnicity and primary insurance status; half were female (Table 1). The study sample was randomly split into a derivation cohort (50%, n = 16,492) and validation cohort (50%, n = 16,430). Individuals in the derivation cohort with a 30‐day readmission had markedly different socioeconomic and clinical characteristics compared to those not readmitted (Table 1).

Baseline Characteristics and Candidate Variables for Risk‐Prediction Model
Columns: Entire Cohort, N = 32,922; Derivation Cohort, No Readmission, N = 14,312; Derivation Cohort, Readmission, N = 2,180; P Value
  • NOTE: Abbreviations: ED, emergency department; ICU, intensive care unit; IQR, interquartile range; SD, standard deviation. *≥20% poverty in zip code as per high poverty area US Census designation. Prior ED visit at site of index hospitalization within the past year. Prior hospitalization at any of 75 acute care hospitals in the North Texas region within the past year. Nonelective admission defined as hospitalization categorized as medical emergency, urgent, or trauma. ∥Calculated from diagnoses available within 1 year prior to index hospitalization. Conditions were considered complications if they were not listed as a principal diagnosis for hospitalization or as a previous diagnosis in the prior year. #On day of discharge or last known observation before discharge. Instabilities were defined as temperature ≥37.8°C, heart rate >100 beats/minute, respiratory rate >24 breaths/minute, systolic blood pressure ≤90 mm Hg, or oxygen saturation <90%. **Discharges to nursing home, skilled nursing facility, or long‐term acute care hospital.

Demographic characteristics
Age, y, mean (SD) 62 (17.3) 61 (17.4) 64 (17.0) <0.001
Female, n (%) 17,715 (53.8) 7,694 (53.8) 1,163 (53.3) 0.72
Race/ethnicity <0.001
White 21,359 (64.9) 9,329 (65.2) 1,361 (62.4)
Black 5,964 (18.1) 2,520 (17.6) 434 (19.9)
Hispanic 4,452 (13.5) 1,931 (13.5) 338 (15.5)
Other 1,147 (3.5) 532 (3.7) 47 (2.2)
Marital status, n (%) <0.001
Single 8,076 (24.5) 3,516 (24.6) 514 (23.6)
Married 13,394 (40.7) 5,950 (41.6) 812 (37.3)
Separated/divorced 3,468 (10.5) 1,460 (10.2) 251 (11.5)
Widowed 4,487 (13.7) 1,868 (13.1) 388 (17.8)
Other 3,497 (10.6) 1,518 (10.6) 215 (9.9)
Primary payer, n (%) <0.001
Private 13,090 (39.8) 5,855 (40.9) 726 (33.3)
Medicare 13,015 (39.5) 5,597 (39.1) 987 (45.3)
Medicaid 2,204 (6.7) 852 (5.9) 242 (11.1)
Charity, self‐pay, or other 4,613 (14.0) 2,008 (14.0) 225 (10.3)
High‐poverty neighborhood, n (%)* 7,468 (22.7) 3,208 (22.4) 548 (25.1) <0.001
Utilization history
≥1 ED visits in past year, n (%) 9,299 (28.2) 3,793 (26.5) 823 (37.8) <0.001
≥1 hospitalizations in past year, n (%) 10,189 (30.9) 4,074 (28.5) 1,012 (46.4) <0.001
Clinical factors from first day of hospitalization
Nonelective admission, n (%) 27,818 (84.5) 11,960 (83.6) 1,960 (89.9) <0.001
Charlson Comorbidity Index, median (IQR)∥ 0 (0-1) 0 (0-0) 0 (0-3) <0.001
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 355 (1.1) 119 (0.8) 46 (2.1) <0.001
Albumin 2-3 g/dL 4,732 (14.4) 1,956 (13.7) 458 (21.0) <0.001
Aspartate aminotransferase >40 U/L 4,610 (14.0) 1,922 (13.4) 383 (17.6) <0.001
Creatine phosphokinase <60 µg/L 3,728 (11.3) 1,536 (10.7) 330 (15.1) <0.001
Mean corpuscular volume >100 fL/red cell 1,346 (4.1) 537 (3.8) 134 (6.2) <0.001
Platelets <90 × 10³/µL 912 (2.8) 357 (2.5) 116 (5.3) <0.001
Platelets >350 × 10³/µL 3,332 (10.1) 1,433 (10.0) 283 (13.0) <0.001
Prothrombin time >35 seconds 248 (0.8) 90 (0.6) 35 (1.6) <0.001
Clinical factors from remainder of hospital stay
Length of stay, d, median (IQR) 4 (2-6) 4 (2-6) 5 (3-8) <0.001
ICU transfer after first 24 hours, n (%) 988 (3.0) 408 (2.9) 94 (4.3) <0.001
Hospital complications, n (%)
Clostridium difficile infection 119 (0.4) 44 (0.3) 24 (1.1) <0.001
Pressure ulcer 358 (1.1) 126 (0.9) 46 (2.1) <0.001
Venous thromboembolism 301 (0.9) 112 (0.8) 34 (1.6) <0.001
Respiratory failure 1,048 (3.2) 463 (3.2) 112 (5.1) <0.001
Central line‐associated bloodstream infection 22 (0.07) 6 (0.04) 5 (0.23) 0.005
Catheter‐associated urinary tract infection 47 (0.14) 20 (0.14) 6 (0.28) 0.15
Acute myocardial infarction 293 (0.9) 110 (0.8) 32 (1.5) <0.001
Pneumonia 1,754 (5.3) 719 (5.0) 154 (7.1) <0.001
Sepsis 853 (2.6) 368 (2.6) 73 (3.4) 0.04
Blood transfusion during hospitalization, n (%) 4,511 (13.7) 1,837 (12.8) 425 (19.5) <0.001
Laboratory abnormalities at discharge#
Blood urea nitrogen >20 mg/dL, n (%) 10,014 (30.4) 4,077 (28.5) 929 (42.6) <0.001
Sodium <135 mEq/L, n (%) 4,583 (13.9) 1,850 (12.9) 440 (20.2) <0.001
Hematocrit ≤27% 3,104 (9.4) 1,231 (8.6) 287 (13.2) <0.001
≥1 vital sign instability at discharge, n (%)# 6,192 (18.8) 2,624 (18.3) 525 (24.1) <0.001
Discharge location, n (%) <0.001
Home 23,339 (70.9) 10,282 (71.8) 1,383 (63.4)
Home health 3,185 (9.7) 1,356 (9.5) 234 (10.7)
Postacute care** 5,990 (18.2) 2,496 (17.4) 549 (25.2)
Hospice 408 (1.2) 178 (1.2) 14 (0.6)

Derivation and Validation of the Full‐Stay EHR Model for 30‐Day Readmission

Our final model included 24 independent variables, including demographic characteristics, utilization history, clinical factors from the first day of admission, and clinical factors from the remainder of the hospital stay (Table 2). The strongest independent predictor of readmission was hospital‐acquired Clostridium difficile infection (adjusted odds ratio [AOR]: 2.03, 95% confidence interval [CI] 1.18‐3.48); other hospital‐acquired complications including pressure ulcers and venous thromboembolism were also significant predictors. Though having Medicaid was associated with increased odds of readmission (AOR: 1.55, 95% CI: 1.31‐1.83), other zip code-level measures of socioeconomic disadvantage were not predictive and were not included in the final model. Being discharged to hospice was associated with markedly lower odds of readmission (AOR: 0.23, 95% CI: 0.13‐0.40).

Final Full‐Stay EHR Model Predicting 30‐Day Readmissions (Derivation Cohort, N = 16,492)
Columns: Univariate Odds Ratio (95% CI); Multivariate Odds Ratio (95% CI)*
  • NOTE: Abbreviations: CI, confidence interval; ED, emergency department. *Values shown reflect adjusted odds ratios and 95% CI for each factor after adjustment for all other factors listed in the table.

Demographic characteristics
Age, per 10 years 1.08 (1.05-1.11) 1.07 (1.04-1.10)
Medicaid 1.97 (1.70-2.29) 1.55 (1.31-1.83)
Widow 1.44 (1.28-1.63) 1.27 (1.11-1.45)
Utilization history
Prior ED visit, per visit 1.08 (1.06-1.10) 1.04 (1.02-1.06)
Prior hospitalization, per hospitalization 1.30 (1.27-1.34) 1.16 (1.12-1.20)
Hospital and clinical factors from first day of hospitalization
Nonelective admission 1.75 (1.51-2.03) 1.42 (1.22-1.65)
Charlson Comorbidity Index, per point 1.19 (1.17-1.21) 1.06 (1.04-1.09)
Laboratory abnormalities within 24 hours of admission
Albumin <2 g/dL 2.57 (1.82-3.62) 1.52 (1.05-2.21)
Albumin 2-3 g/dL 1.68 (1.50-1.88) 1.20 (1.06-1.36)
Aspartate aminotransferase >40 U/L 1.37 (1.22-1.55) 1.21 (1.06-1.38)
Creatine phosphokinase <60 µg/L 1.48 (1.30-1.69) 1.28 (1.11-1.46)
Mean corpuscular volume >100 fL/red cell 1.68 (1.38-2.04) 1.32 (1.07-1.62)
Platelets <90 × 10³/µL 2.20 (1.77-2.72) 1.56 (1.23-1.97)
Platelets >350 × 10³/µL 1.34 (1.17-1.54) 1.24 (1.08-1.44)
Prothrombin time >35 seconds 2.58 (1.74-3.82) 1.92 (1.27-2.90)
Hospital and clinical factors from remainder of hospital stay
Length of stay, per day 1.08 (1.07-1.09) 1.06 (1.04-1.07)
Hospital complications
Clostridium difficile infection 3.61 (2.19-5.95) 2.03 (1.18-3.48)
Pressure ulcer 2.43 (1.73-3.41) 1.64 (1.15-2.34)
Venous thromboembolism 2.01 (1.36-2.96) 1.55 (1.03-2.32)
Laboratory abnormalities at discharge
Blood urea nitrogen >20 mg/dL 1.86 (1.70-2.04) 1.37 (1.24-1.52)
Sodium <135 mEq/L 1.70 (1.52-1.91) 1.34 (1.18-1.51)
Hematocrit ≤27% 1.61 (1.40-1.85) 1.22 (1.05-1.41)
Vital sign instability at discharge, per instability 1.29 (1.20-1.40) 1.25 (1.15-1.36)
Discharged to hospice 0.51 (0.30-0.89) 0.23 (0.13-0.40)
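To illustrate how a multivariable logistic model such as the one above converts patient characteristics into a predicted 30-day readmission probability, the sketch below combines a few of the table's adjusted odds ratios for one hypothetical patient. The intercept and the selection of covariates are hypothetical assumptions, since the table reports odds ratios rather than the full fitted equation.

```python
# Illustrative only: converts a handful of adjusted odds ratios into a predicted
# probability for one hypothetical patient. The intercept and the choice of
# covariates are assumptions; the published model has 24 variables.
import math

ASSUMED_INTERCEPT = -2.8   # hypothetical baseline log-odds, not from the paper

# (adjusted OR from Table 2, hypothetical patient value)
covariates = {
    "age_per_10_years": (1.07, 7),        # a 70-year-old: 7 ten-year units
    "medicaid": (1.55, 1),
    "prior_hospitalizations": (1.16, 2),
    "nonelective_admission": (1.42, 1),
    "bun_over_20_at_discharge": (1.37, 1),
}

# Sum the log odds ratios weighted by the patient's values, then apply the
# logistic (inverse-logit) transform to obtain a probability.
log_odds = ASSUMED_INTERCEPT + sum(math.log(odds_ratio) * value
                                   for odds_ratio, value in covariates.values())
predicted_risk = 1 / (1 + math.exp(-log_odds))
print(f"Predicted 30-day readmission risk: {predicted_risk:.1%}")
```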

In our validation cohort, the full‐stay EHR model had fair discrimination, with a C statistic of 0.69 (95% CI: 0.68‐0.70) (Table 3). The full‐stay EHR model was well calibrated across all quintiles of risk, with slight overestimation of predicted risk in the lowest and highest quintiles (Figure 1a) (see Supporting Table 5 in the online version of this article). It also effectively stratified individuals across a broad range of predicted readmission risk from 4.1% in the lowest decile to 36.5% in the highest decile (Table 3).

Comparison of the Discrimination and Reclassification of Different Readmission Models*
Columns: Model Name; C‐Statistic (95% CI); IDI, % (95% CI); NRI (95% CI); Average Predicted Risk, % (Lowest Decile, Highest Decile)
  • NOTE: Abbreviations: CI, confidence interval; EHR, electronic health record; IDI, Integrated Discrimination Improvement; NRI, Net Reclassification Index. *All measures were assessed using the validation cohort (N = 16,430), except for estimating the C‐statistic for the derivation cohort. P value <0.001 for all pairwise comparisons of C‐statistic between full‐stay model and first‐day, LACE, and HOSPITAL models, respectively. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Full‐stay EHR model
Derivation cohort 0.72 (0.70 to 0.73) 4.1 36.5
Validation cohort 0.69 (0.68 to 0.70) [Reference] [Reference] 4.1 36.5
First‐day EHR model 0.67 (0.66 to 0.68) -1.2 (-1.4 to -1.0) -0.020 (-0.038 to -0.002) 5.8 31.9
LACE model 0.65 (0.64 to 0.66) -2.6 (-2.9 to -2.3) -0.046 (-0.067 to -0.024) 6.1 27.5
HOSPITAL model 0.64 (0.62 to 0.65) -3.2 (-3.5 to -2.9) -0.058 (-0.080 to -0.035) 6.7 26.6
Figure 1
Comparison of the calibration of different readmission models. Calibration graphs for full‐stay (a), first‐day (b), LACE (c), and HOSPITAL (d) models in the validation cohort. Each graph shows predicted probability compared to observed probability of readmission by quintiles of risk for each model. The LACE model includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year. The HOSPITAL model includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay.

Comparing the Performance of the Full‐Stay EHR Model to Other Models

The full‐stay EHR model had better discrimination compared to the first‐day EHR model and the LACE and HOSPITAL models, though the magnitude of improvement was modest (Table 3). The full‐stay EHR model also stratified individuals across a broader range of readmission risk, and was better able to discriminate and classify those in the highest quintile of risk from those in the lowest 4 quintiles of risk compared to other models as assessed by the IDI and NRI (Table 3) (see Supporting Tables 1-4 and Supporting Figure 2 in the online version of this article). In terms of model calibration, both the first‐day EHR and LACE models were also well calibrated, whereas the HOSPITAL model was less robust (Figure 1).

The diagnostic accuracy of the full‐stay EHR model in correctly predicting those in the highest quintile of risk was better than that of the first‐day, LACE, and HOSPITAL models, though overall improvements in the sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios were also modest (see Supporting Table 6 in the online version of this article).

DISCUSSION

In this study, we used clinically detailed EHR data from the entire hospitalization on 32,922 individuals treated in 6 diverse hospitals to develop an all‐payer, multicondition readmission risk‐prediction model. To our knowledge, this is the first 30‐day hospital readmission risk‐prediction model to use a comprehensive set of factors from EHR data from the entire hospital stay. Prior EHR‐based models have focused exclusively on data available on or prior to the first day of admission, which account for clinical severity on admission but do not account for factors uncovered during the inpatient stay that influence the chance of a postdischarge adverse outcome.[15, 30] We specifically assessed the prognostic impact of a comprehensive set of factors from the entire index hospitalization, including hospital‐acquired complications, clinical trajectory, and stability on discharge in predicting hospital readmissions. Our full‐stay EHR model had statistically better discrimination, calibration, and diagnostic accuracy than our existing all‐cause first‐day EHR model[15] and 2 previously published readmissions models that included more limited information from hospitalization (such as length of stay).[9, 10] However, although the more complicated full‐stay EHR model was statistically better than previously published models, we were surprised that the predictive performance was only modestly improved despite the inclusion of many additional clinically relevant prognostic factors.

Taken together, our study has several important implications. First, the added complexity and resource intensity of implementing a full‐stay EHR model yields only modestly improved readmission risk prediction. Thus, hospitals and healthcare systems interested in targeting their highest‐risk individuals for interventions to reduce 30‐day readmission should consider doing so within the first day of hospital admission. Our group's previously derived and validated first‐day EHR model, which used data only from the first day of admission, qualitatively performed nearly as well as the full‐stay EHR model.[15] Additionally, a recent study using only preadmission EHR data to predict 30‐day readmissions also achieved similar discrimination and diagnostic accuracy as our full‐stay model.[30]

Second, the field of readmissions risk‐prediction modeling may be reaching the maximum achievable model performance using data that are currently available in the EHR. Our limited ability to accurately predict all‐cause 30‐day readmission risk may reflect the influence of currently unmeasured patient, system, and community factors on readmissions.[31, 32, 33] Due to the constraints of data collected in the EHR, we were unable to include several patient‐level clinical characteristics associated with hospital readmission, including self‐perceived health status, functional impairment, and cognition.[33, 34, 35, 36] However, given their modest effect sizes (ORs ranging from 1.06-2.10), adequately measuring and including these risk factors in our model may not meaningfully improve model performance and diagnostic accuracy. Further, many social and behavioral patient‐level factors are also not consistently available in EHR data. Though we explored the role of several neighborhood‐level socioeconomic measures, including prevalence of poverty, median income, education, and unemployment, we found that none were significantly associated with 30‐day readmissions. These particular measures may have been inadequate to characterize individual‐level social and behavioral factors, as several previous studies have demonstrated that patient‐level factors such as social support, substance abuse, and medication and visit adherence can influence readmission risk in heart failure and pneumonia.[11, 16, 22, 25] This underscores the need for more standardized routine collection of data across functional, social, and behavioral domains in clinical settings, as recently championed by the Institute of Medicine.[11, 37] Integrating data from outside the EHR on postdischarge health behaviors, self‐management, follow‐up care, recovery, and home environment may be another important but untapped strategy for further improving prediction of readmissions.[25, 38]

Third, a multicondition readmission risk‐prediction model may be a less effective strategy than more customized disease‐specific models for selected conditions associated with high 30‐day readmission rates. Our group's previously derived and internally validated models for heart failure and human immunodeficiency virus had superior discrimination compared to our full‐stay EHR model (C statistic of 0.72 for each).[11, 13] However, given differences in the included population and time periods studied, a head‐to‐head comparison of these different strategies is needed to assess differences in model performance and utility.

Our study had several strengths. To our knowledge, this is the first study to rigorously measure the additive influence of in‐hospital complications, clinical trajectory, and stability on discharge on the risk of 30‐day hospital readmission. Additionally, our study included a large, diverse study population that included all payers, all ages of adults, a mix of community, academic, and safety net hospitals, and individuals from a broad array of racial/ethnic and socioeconomic backgrounds.

Our results should be interpreted in light of several limitations. First, though we sought to represent a diverse group of hospitals, all study sites were located within north Texas and generalizability to other regions is uncertain. Second, our ascertainment of prior hospitalizations and readmissions was more inclusive than what could be typically accomplished in real time using only EHR data from a single clinical site. We performed a sensitivity analysis using only prior utilization data available within the EHR from the index hospital with no meaningful difference in our findings (data not shown). Additionally, a recent study found that 30‐day readmissions occur at the index hospital for over 75% of events, suggesting that 30‐day readmissions are fairly comprehensively captured even with only single‐site data.[39] Third, we were not able to include data on outpatient visits before or after the index hospitalization, which may influence the risk of readmission.[1, 40]

In conclusion, incorporating clinically granular EHR data from the entire course of hospitalization modestly improves prediction of 30‐day readmissions compared to models that only include information from the first 24 hours of hospital admission or models that use far fewer variables. However, given the limited improvement in prediction, our findings suggest that from the practical perspective of implementing real‐time models to identify those at highest risk for readmission, it may not be worth the added complexity of waiting until the end of a hospitalization to leverage additional data on hospital complications and the trajectory of laboratory and vital sign values currently available in the EHR. Further improvement in prediction of readmissions will likely require accounting for psychosocial, functional, behavioral, and postdischarge factors not currently present in the inpatient EHR.

Disclosures: This study was presented at the Society of Hospital Medicine 2015 Annual Meeting in National Harbor, Maryland, and the Society of General Internal Medicine 2015 Annual Meeting in Toronto, Canada. This work was supported by the Agency for Healthcare Research and Quality-funded UT Southwestern Center for Patient‐Centered Outcomes Research (1R24HS022418‐01) and the Commonwealth Foundation (#20100323). Drs. Nguyen and Makam received funding from the UT Southwestern KL2 Scholars Program (NIH/NCATS KL2 TR001103). Dr. Halm was also supported in part by NIH/NCATS U54 RFA‐TR‐12‐006. The study sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. The authors have no conflicts of interest to disclose.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418-1428.
  2. Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402.
  3. Rennke S, Nguyen OK, Shoeb MH, Magan Y, Wachter RM, Ranji SR. Hospital‐initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):433-440.
  4. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520-528.
  5. Rennke S, Shoeb MH, Nguyen OK, Magan Y, Wachter RM, Ranji SR. Interventions to Improve Care Transitions at Hospital Discharge. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
  6. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22(12):998-1005.
  7. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood). 2014;33(7):1148-1154.
  8. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688-1698.
  9. Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551-557.
  10. Donze J, Aujesky D, Williams D, Schnipper JL. Potentially avoidable 30‐day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632-638.
  11. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30‐day readmission or death using electronic medical record data. Med Care. 2010;48(11):981-988.
  12. Singal AG, Rahimi RS, Clark C, et al. An automated model using electronic medical record data identifies patients with cirrhosis at high risk for readmission. Clin Gastroenterol Hepatol. 2013;11(10):1335-1341.e1331.
  13. Nijhawan AE, Clark C, Kaplan R, Moore B, Halm EA, Amarasingham R. An electronic medical record‐based model to predict 30‐day risk of readmission and death among HIV‐infected inpatients. J Acquir Immune Defic Syndr. 2012;61(3):349-358.
  14. Horwitz LI, Partovian C, Lin Z, et al. Development and use of an administrative claims measure for profiling hospital‐wide performance on 30‐day unplanned readmission. Ann Intern Med. 2014;161(10 suppl):S66-S75.
  15. Amarasingham R, Velasco F, Xie B, et al. Electronic medical record‐based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models. BMC Med Inform Decis Mak. 2015;15(1):39.
  16. Watson AJ, O'Rourke J, Jethwani K, et al. Linking electronic health record‐extracted psychosocial data in real‐time to risk of readmission for heart failure. Psychosomatics. 2011;52(4):319-327.
  17. Ashton CM, Wray NP. A conceptual framework for the study of early readmission as an indicator of quality of care. Soc Sci Med. 1996;43(11):1533-1541.
  18. Dharmarajan K, Hsieh AF, Lin Z, et al. Hospital readmission performance and patterns of readmission: retrospective cohort study of Medicare admissions. BMJ. 2013;347:f6571.
  19. Cassel CK, Conway PH, Delbanco SF, Jha AK, Saunders RS, Lee TH. Getting more performance from performance measurement. N Engl J Med. 2014;371(23):2145-2147.
  20. Bradley EH, Sipsma H, Horwitz LI, et al. Hospital strategy uptake and reductions in unplanned readmission rates for patients with heart failure: a prospective study. J Gen Intern Med. 2015;30(5):605-611.
  21. Krumholz HM. Post‐hospital syndrome—an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100-102.
  22. Calvillo‐King L, Arnold D, Eubank KJ, et al. Impact of social factors on risk of readmission or mortality in pneumonia and heart failure: systematic review. J Gen Intern Med. 2013;28(2):269-282.
  23. Keyhani S, Myers LJ, Cheng E, Hebert P, Williams LS, Bravata DM. Effect of clinical and social risk factors on hospital profiling for stroke readmission: a cohort study. Ann Intern Med. 2014;161(11):775-784.
  24. Kind AJ, Jencks S, Brock J, et al. Neighborhood socioeconomic disadvantage and 30‐day rehospitalization: a retrospective cohort study. Ann Intern Med. 2014;161(11):765-774.
  25. Arbaje AI, Wolff JL, Yu Q, Powe NR, Anderson GF, Boult C. Postdischarge environmental and socioeconomic factors and the likelihood of early hospital readmission among community‐dwelling Medicare beneficiaries. Gerontologist. 2008;48(4):495-504.
  26. Hu J, Gonsahn MD, Nerenz DR. Socioeconomic status and readmissions: evidence from an urban teaching hospital. Health Aff (Millwood). 2014;33(5):778-785.
  27. Nagasako EM, Reidhead M, Waterman B, Dunagan WC. Adding socioeconomic data to hospital readmissions calculations may produce more useful results. Health Aff (Millwood). 2014;33(5):786-791.
  28. Pencina MJ, D'Agostino RB, D'Agostino RB, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157-172; discussion 207-212.
  29. Leening MJ, Vedder MM, Witteman JC, Pencina MJ, Steyerberg EW. Net reclassification improvement: computation, interpretation, and controversies: a literature review and clinician's guide. Ann Intern Med. 2014;160(2):122-131.
  30. Shadmi E, Flaks‐Manov N, Hoshen M, Goldman O, Bitterman H, Balicer RD. Predicting 30‐day readmissions with preadmission electronic health record data. Med Care. 2015;53(3):283-289.
  31. Kangovi S, Grande D. Hospital readmissions—not just a measure of quality. JAMA. 2011;306(16):1796-1797.
  32. Joynt KE, Jha AK. Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366-1369.
  33. Greysen SR, Stijacic Cenzer I, Auerbach AD, Covinsky KE. Functional impairment and hospital readmission in medicare seniors. JAMA Intern Med. 2015;175(4):559-565.
  34. Holloway JJ, Thomas JW, Shapiro L. Clinical and sociodemographic risk factors for readmission of Medicare beneficiaries. Health Care Financ Rev. 1988;10(1):27-36.
  35. Patel A, Parikh R, Howell EH, Hsich E, Landers SH, Gorodeski EZ. Mini‐cog performance: novel marker of post discharge risk among patients hospitalized for heart failure. Circ Heart Fail. 2015;8(1):8-16.
  36. Hoyer EH, Needham DM, Atanelov L, Knox B, Friedman M, Brotman DJ. Association of impaired functional status at hospital discharge and subsequent rehospitalization. J Hosp Med. 2014;9(5):277-282.
  37. Adler NE, Stead WW. Patients in context—EHR capture of social and behavioral determinants of health. N Engl J Med. 2015;372(8):698-701.
  38. Nguyen OK, Chan CV, Makam A, Stieglitz H, Amarasingham R. Envisioning a social‐health information exchange as a platform to support a patient‐centered medical neighborhood: a feasibility study. J Gen Intern Med. 2015;30(1):60-67.
  39. Henke RM, Karaca Z, Lin H, Wier LM, Marder W, Wong HS. Patient factors contributing to variation in same‐hospital readmission rate. Med Care Res Review. 2015;72(3):338-358.
  40. Weinberger M, Oddone EZ, Henderson WG. Does increased access to primary care reduce hospital readmissions? Veterans Affairs Cooperative Study Group on Primary Care and Hospital Readmission. N Engl J Med. 1996;334(22):1441-1447.
Issue
Journal of Hospital Medicine - 11(7)
Page Number
473-480
Display Headline
Predicting all‐cause readmissions using electronic health record data from the entire hospitalization: Model development and comparison
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Oanh Kieu Nguyen, MD, 5323 Harry Hines Blvd., Dallas, Texas 75390‐9169; Telephone: 214‐648‐3135; Fax: 214‐648‐3232; E‐mail: [email protected]

Financial Performance and Outcomes

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Relationship between hospital financial performance and publicly reported outcomes

Hospital care accounts for the single largest category of national healthcare expenditures, totaling $936.9 billion in 2013.[1] With ongoing scrutiny of US healthcare spending, hospitals are under increasing pressure to justify high costs and robust profits.[2] However, the dominant fee‐for‐service reimbursement model creates incentives for hospitals to prioritize high volume over high‐quality care to maximize profits.[3] Because hospitals may be reluctant to implement improvements if better quality is not accompanied by better payment or improved financial margins, an approach to stimulate quality improvement among hospitals has been to leverage consumer pressure through required public reporting of selected outcome metrics.[4, 5] Public reporting of outcomes is thought to influence hospital reputation; in turn, reputation affects patient perceptions and influences demand for hospital services, potentially enabling reputable hospitals to command higher prices for services to enhance hospital revenue.[6, 7]

Though improving outcomes is thought to reduce overall healthcare costs, it is unclear whether improving outcomes results in a hospital's financial return on investment.[4, 5, 8] Quality improvement can require substantial upfront investment, requiring that hospitals already have robust financial health to engage in such initiatives.[9, 10] Consequently, instead of stimulating broad efforts in quality improvement, public reporting may exacerbate existing disparities in hospital quality and finances, by rewarding already financially healthy hospitals, and by inadvertently penalizing hospitals without the means to invest in quality improvement.[11, 12, 13, 14, 15] Alternately, because fee‐for‐service remains the dominant reimbursement model for hospitals, loss of revenue through reducing readmissions may outweigh any financial gains from improved public reputation and result in worse overall financial performance, though robust evidence for this concern is lacking.[16, 17]

A small number of existing studies suggest a limited correlation between improved hospital financial performance and improved quality, patient safety, and lower readmission rates.[18, 19, 20] However, these studies had several limitations. They were conducted prior to public reporting of selected outcome metrics by the Centers for Medicare and Medicaid Services (CMS)[18, 19, 20]; used data from the Medicare Cost Report, which is not uniformly audited and thus prone to measurement error[19, 20]; used only relative measures of hospital financial performance (eg, operating margin), which do not capture the absolute amount of revenue potentially available for investment in quality improvement[18, 19]; or compared only hospitals at the extremes of financial performance, potentially exaggerating the magnitude of the relationship between hospital financial performance and quality outcomes.[19]

To address this gap in the literature, we sought to assess whether hospitals with robust financial performance have lower 30‐day risk‐standardized mortality and hospital readmission rates for acute myocardial infarction (AMI), congestive heart failure (CHF), and pneumonia (PNA). Given the concern that hospitals with the lowest mortality and readmission rates may experience a decrease in financial performance due to the lower volume of hospitalizations, we also assessed whether hospitals with the lowest readmission and mortality rates had a differential change in financial performance over time compared to hospitals with the highest rates.

METHODS

Data Sources and Study Population

This was an observational study using audited financial data from the 2008 and 2012 Hospital Annual Financial Data Files from the Office of Statewide Health Planning and Development (OSHPD) in the state of California, merged with data on outcome measures publicly reported by CMS via the Hospital Compare website for July 1, 2008 to June 30, 2011.[21, 22] We included all general acute care hospitals with available OSHPD data in 2008 and at least 1 publicly reported outcome from 2008 to 2011. We excluded hospitals without 1 year of audited financial data for 2008 and hospitals that closed during 2008 to 2011.
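
To make this kind of linkage concrete, the sketch below joins hypothetical extracts of the OSHPD financial files and the Hospital Compare outcomes file in Python with pandas. The file names, the shared provider identifier, and the exclusion flags are illustrative assumptions, not the study's actual data layout or code.

```python
import pandas as pd

# Hypothetical extracts; file and column names are assumptions, not the study's actual files.
fin_2008 = pd.read_csv("oshpd_financials_2008.csv")          # audited 2008 financials
outcomes = pd.read_csv("hospital_compare_2008_2011.csv")     # 30-day RSMR/RSRR, 7/2008-6/2011

# Link hospitals on an assumed shared provider identifier; an inner join keeps hospitals
# with 2008 financial data and at least 1 publicly reported outcome.
cohort = fin_2008.merge(outcomes, on="provider_id", how="inner")

# Apply the stated exclusions (flags below are hypothetical).
cohort = cohort[cohort["full_year_audit_2008"] & ~cohort["closed_2008_to_2011"]]

print(f"Analytic cohort: {len(cohort)} general acute care hospitals")
```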

Measures of Financial Performance

Because we hypothesized that the absolute amount of revenue generated from clinical operations would influence investment in quality improvement programs more so than relative changes in revenue,[20] we used net revenue from operations (total operating revenue minus total operating expense) as our primary measure of hospital financial performance. We also performed 2 companion analyses using 2 commonly reported relative measures of financial performance: operating margin (net revenue from operations divided by total operating revenue) and total margin (net total revenue divided by total revenue from all sources). Net revenue from operations for 2008 was adjusted to 2012 US dollars using the chained Consumer Price Index for all urban consumers.
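
Written out as formulas, the three measures and the inflation adjustment are as follows (a direct restatement of the definitions above; the notation and the ratio form of the adjustment are ours):

```latex
\text{Net operating revenue} = \text{total operating revenue} - \text{total operating expense}

\text{Operating margin} = \frac{\text{net operating revenue}}{\text{total operating revenue}},
\qquad
\text{Total margin} = \frac{\text{net total revenue}}{\text{total revenue from all sources}}

\text{Net operating revenue}_{2008,\,2012\$}
  = \text{net operating revenue}_{2008} \times \frac{\text{C-CPI-U}_{2012}}{\text{C-CPI-U}_{2008}}
```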

Outcomes

For our primary analysis, the primary outcomes were publicly reported all‐cause 30‐day risk‐standardized mortality rates (RSMR) and readmission rates (RSRR) for AMI, CHF, and PNA aggregated over a 3‐year period. These measures were adjusted for key demographic and clinical characteristics available in Medicare data. CMS began publicly reporting 30‐day RSMR for AMI and CHF in June 2007, RSMR for PNA in June 2008, and RSRR for all 3 conditions in July 2009.[23, 24]

To assess whether public reporting had an effect on subsequent hospital financial performance, we conducted a companion analysis where the primary outcome of interest was change in hospital financial performance over time, using the same definitions of financial performance outlined above. For this companion analysis, publicly reported 30‐day RSMR and RSRR for AMI, CHF, and PNA were assessed as predictors of subsequent financial performance.

Hospital Characteristics

Hospital characteristics were ascertained from the OSHPD data. Safety‐net status was defined as hospitals with an annual Medicaid caseload (number of Medicaid discharges divided by the total number of discharges) at least 1 standard deviation above the mean Medicaid caseload, as defined in previous studies.[25]
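
As a concrete illustration of this threshold rule only, the short sketch below computes Medicaid caseload and flags safety-net hospitals in Python; the data frame and its column names are hypothetical.

```python
import pandas as pd

# Hypothetical hospital-level discharge counts; column names are illustrative assumptions.
hospitals = pd.DataFrame({
    "hospital_id": ["A", "B", "C", "D"],
    "medicaid_discharges": [1200, 300, 2600, 150],
    "total_discharges": [4000, 3000, 5200, 2500],
})

# Annual Medicaid caseload = Medicaid discharges / total discharges.
hospitals["medicaid_caseload"] = (
    hospitals["medicaid_discharges"] / hospitals["total_discharges"]
)

# Safety-net status: caseload at least 1 standard deviation above the mean caseload.
cutoff = hospitals["medicaid_caseload"].mean() + hospitals["medicaid_caseload"].std()
hospitals["safety_net"] = hospitals["medicaid_caseload"] >= cutoff

print(hospitals[["hospital_id", "medicaid_caseload", "safety_net"]])
```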

Statistical Analyses

Effect of Baseline Financial Performance on Subsequent Publicly Reported Outcomes

To estimate the relationship between baseline hospital financial performance in 2008 and subsequent RSMR and RSRR for AMI, CHF, and PNA from 2008 to 2011, we used linear regression adjusted for the following hospital characteristics: teaching status, rural location, bed size, safety‐net status, ownership, Medicare caseload, and volume of cases reported for the respective outcome. We accounted for clustering of hospitals by ownership. We adjusted for hospital volume of reported cases for each condition given that the risk‐standardization models used by CMS shrink outcomes for small hospitals to the mean, and therefore do not account for a potential volume‐outcome relationship.[26] We conducted a sensitivity analysis excluding hospitals at the extremes of financial performance, defined as hospitals with extreme outlier values for each financial performance measure (eg, values more than 3 times the interquartile range below the first quartile or above the third quartile).[27] Nonlinearity of financial performance measures was assessed using restricted cubic splines. For ease of interpretation, we scaled the estimated change in RSMR and RSRR per $50 million increase in net revenue from operations, and graphed nonparametric relationships using restricted cubic splines.
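
The published analyses were run in Stata; purely to make two of the key steps concrete, the sketch below shows the extreme-outlier rule and one adjusted linear model with owner-clustered standard errors in Python with statsmodels. The variable names, the simplified covariate coding, and the omission of the restricted cubic spline terms are our assumptions, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def extreme_outlier_mask(x: pd.Series) -> pd.Series:
    """Tukey-style extreme outliers: more than 3 * IQR below Q1 or above Q3."""
    q1, q3 = x.quantile(0.25), x.quantile(0.75)
    iqr = q3 - q1
    return (x < q1 - 3 * iqr) | (x > q3 + 3 * iqr)

def fit_outcome_model(df: pd.DataFrame):
    """Linear regression of a 30-day outcome on net revenue (per $50M) plus hospital
    covariates, with standard errors clustered on hospital owner. Columns are hypothetical."""
    model = smf.ols(
        "rsmr_chf ~ net_rev_per_50m + teaching + rural + C(bed_size_cat) + safety_net"
        " + C(ownership) + medicare_caseload + reported_case_volume",
        data=df,
    )
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["owner_id"]})

# Sensitivity analysis: refit after dropping hospitals that are extreme financial outliers.
# df_sensitivity = df[~extreme_outlier_mask(df["net_revenue_millions"])]
# print(fit_outcome_model(df).summary())
# print(fit_outcome_model(df_sensitivity).summary())
```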

Effect of Public Reporting on Subsequent Hospital Financial Performance

To assess whether public reporting had an effect on subsequent hospital financial performance, we conducted a companion hospital‐level difference‐in‐differences analysis to assess for differential changes in hospital financial performance between 2008 and 2012, stratified by tertiles of RSMR and RSRR rates from 2008 to 2011. This approach compares differences in an outcome of interest (hospital financial performance) within each group (where each group is a tertile of publicly reported rates of RSMR or RSRR), and then compares the difference in these differences between groups. Therefore, these analyses use each group as their own historical control and the opposite group as a concurrent control to account for potential secular trends. To conduct our difference‐in‐differences analysis, we compared the change in financial performance over time in the top tertile of hospitals to the change in financial performance over time in the bottom tertile of hospitals with respect to AMI, CHF, and PNA RSMR and RSRR. Our models therefore included year (2008 vs 2012), tertile of publicly reported rates for RSMR or RSRR, and the interaction between them as predictors, where the interaction was the difference‐in‐differences term and the primary predictor of interest. In addition to adjusting for hospital characteristics and accounting for clustering as mentioned above, we also included 3 separate interaction terms for year with bed size, safety‐net status, and Medicare caseload, to account for potential changes in the hospitals over time that may have independently influenced financial performance and publicly reported 30‐day measures. For sensitivity analyses, we repeated our difference‐in‐differences analyses excluding hospitals with a change in ownership and extreme outliers with respect to financial performance in 2008. We performed model diagnostics including assessment of functional form, linearity, normality, constant variance, and model misspecification. All analyses were conducted using Stata version 12.1 (StataCorp, College Station, TX). This study was deemed exempt from review by the UT Southwestern Medical Center institutional review board.
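
In equation form, the model just described can be written roughly as follows (our notation; the hospital covariates and the additional year-by-characteristic interactions are collapsed into a single vector for brevity):

```latex
Y_{ht} = \beta_0
       + \beta_1 \,\mathrm{Year2012}_t
       + \beta_2 \,\mathrm{TopTertile}_h
       + \beta_3 \,(\mathrm{Year2012}_t \times \mathrm{TopTertile}_h)
       + \boldsymbol{\gamma}^{\top}\mathbf{X}_{ht}
       + \varepsilon_{ht}
```

Here Y_{ht} is the financial performance of hospital h in year t, X_{ht} stacks the hospital characteristics together with the year-by-bed-size, year-by-safety-net, and year-by-Medicare-caseload interactions, and the interaction coefficient beta_3 is the difference-in-differences term of interest.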

RESULTS

Among the 279 included hospitals (see Supporting Figure 1 in the online version of this article), 278 also had financial data available for 2012. In 2008, the median net revenue from operations was $1.6 million (interquartile range [IQR], −$2.4 to $10.3 million), the median operating margin was 1.5% (IQR, −4.6% to 6.8%), and the median total margin was 2.5% (IQR, −2.2% to 7.5%) (Table 1). The number of hospitals reporting each outcome, and median outcome rates, are shown in Table 2.

Hospital Characteristics and Financial Performance in 2008 and 2012
2008, n = 279 2012, n = 278
  • NOTE: Abbreviations: IQR, interquartile range; SD, standard deviation. *Medicaid caseload equivalent to 1 standard deviation above the mean (41.8% for 2008 and 42.1% for 2012). Operated by an investor‐individual, investor‐partnership, or investor‐corporation.

Hospital characteristics
Teaching, n (%) 28 (10.0) 28 (10.0)
Rural, n (%) 55 (19.7) 55 (19.7)
Bed size, n (%)
0–99 (small) 57 (20.4) 55 (19.8)
100–299 (medium) 130 (46.6) 132 (47.5)
≥300 (large) 92 (33.0) 91 (32.7)
Safety‐net hospital, n (%)* 46 (16.5) 48 (17.3)
Hospital ownership, n (%)
City or county 15 (5.4) 16 (5.8)
District 42 (15.1) 39 (14.0)
Investor 66 (23.7) 66 (23.7)
Private nonprofit 156 (55.9) 157 (56.5)
Medicare caseload, mean % (SD) 41.6 (14.7) 43.6 (14.7)
Financial performance measures
Net revenue from operations, median $ in millions (IQR; range) 1.6 (−2.4 to 10.3; −495.9 to 144.1) 3.2 (−2.9 to 15.4; −396.2 to 276.8)
Operating margin, median % (IQR; range) 1.5 (−4.6 to 6.8; −77.8 to 26.4) 2.3 (−3.9 to 8.2; −134.8 to 21.1)
Total margin, median % (IQR; range) 2.5 (−2.2 to 7.5; −101.0 to 26.3) 4.5 (−0.7 to 9.8; −132.2 to 31.1)
Relationship Between Hospital Financial Performance and 30‐Day Mortality and Readmission Rates*
No. Median % (IQR) Adjusted % Change (95% CI) per $50 Million Increase in Net Revenue From Operations
Overall Extreme Outliers Excluded
  • NOTE: Abbreviations: CI, confidence interval; IQR, interquartile range. *Thirty‐day outcomes are risk standardized for age, sex, comorbidity count, and indicators of patient frailty.[3] Each outcome was modeled separately and adjusted for teaching status, metropolitan status (urban vs rural), bed size, safety‐net hospital status, hospital ownership type, Medicare caseload, and volume of cases reported for the respective outcome, accounting for clustering of hospitals by owner. Twenty‐three hospitals were identified as extreme outliers with respect to net revenue from operations (10 underperformers with net revenue <−$49.4 million and 13 overperformers with net revenue >$52.1 million). There was a nonlinear and statistically significant relationship between net revenue from operations and readmission rate for myocardial infarction. Net revenue from operations was modeled as a cubic spline function. See Figure 1. The overall adjusted F statistic was 4.8 (P < 0.001). There was a nonlinear and statistically significant relationship between net revenue from operations and mortality rate for heart failure after exclusion of extreme outliers. Net revenue from operations was modeled as a cubic spline function. See Figure 1. The overall adjusted F statistic was 3.6 (P = 0.008).

Myocardial infarction
Mortality rate 211 15.2 (14.2–16.2) 0.07 (−0.10 to 0.24) 0.63 (−0.21 to 1.48)
Readmission rate 184 19.4 (18.5–20.2) Nonlinear −0.34 (−1.17 to 0.50)
Congestive heart failure
Mortality rate 259 11.1 (10.1–12.1) 0.17 (−0.01 to 0.35) Nonlinear
Readmission rate 264 24.5 (23.5–25.6) −0.07 (−0.27 to 0.14) −0.45 (−1.36 to 0.47)
Pneumonia
Mortality rate 268 11.6 (10.4–13.2) −0.17 (−0.42 to 0.07) −0.35 (−1.19 to 0.49)
Readmission rate 268 18.2 (17.3–19.1) −0.04 (−0.20 to 0.11) −0.56 (−1.27 to 0.16)

Relationship Between Financial Performance and Publicly Reported Outcomes

Acute Myocardial Infarction

We did not observe a consistent relationship between hospital financial performance and AMI mortality and readmission rates. In our overall adjusted analyses, net revenue from operations was not associated with mortality, but was significantly associated with a decrease in AMI readmissions among hospitals with net revenue from operations between approximately $5 million and $145 million (nonlinear relationship, F statistic = 4.8, P < 0.001) (Table 2, Figure 1A). However, after excluding 23 extreme outlying hospitals by net revenue from operations (10 underperformers with net revenue <−$49.4 million and 13 overperformers with net revenue >$52.1 million), this relationship was no longer observed. Using operating margin instead of net revenue from operations as the measure of hospital financial performance, we observed a 0.2% increase in AMI mortality (95% confidence interval [CI]: 0.06%‐0.35%) (see Supporting Table 1 and Supporting Figure 2 in the online version of this article) for each 10% increase in operating margin, which persisted with the exclusion of 5 outlying hospitals by operating margin (all 5 were underperformers, with operating margins <−38.6%). However, using total margin as the measure of financial performance, there was no significant relationship with either mortality or readmissions (see Supporting Table 2 and Supporting Figure 3 in the online version of this article).

Figure 1
Relationship between financial performance and 30‐day readmission and mortality. The open circles represent individual hospitals. The bold dashed line and the bold solid line are the unadjusted and adjusted cubic spline curves, respectively, representing the nonlinear relationship between net revenue from operations and each outcome. The shaded grey area represents the 95% confidence interval for the adjusted cubic spline curve. Thin vertical dashed lines represent median values for net revenue from operations. Multivariate models were adjusted for teaching status, metropolitan status (urban vs rural), bed size, safety‐net hospital status, hospital ownership, Medicare caseload, and volume of cases reported for the respective outcome, accounting for clustering of hospitals by owner. *Twenty‐three hospitals were identified as outliers with respect to net revenue from clinical operations (10 “underperformers” with net revenue <−$49.4 million and 13 “overperformers” with net revenue >$52.1 million).

Congestive Heart Failure

In our primary analyses, we did not observe a significant relationship between net revenue from operations and CHF mortality and readmission rates. However, after excluding 23 extreme outliers, increasing net revenue from operations was associated with a modest increase in CHF mortality among hospitals with net revenue between approximately −$35 million and $20 million (nonlinear relationship, F statistic = 3.6, P = 0.008) (Table 2, Figure 1B). Using alternate measures of financial performance, we observed a consistent relationship between increasing hospital financial performance and higher 30‐day CHF mortality rate. Using operating margin, we observed a slight increase in the mortality rate for CHF (0.26% increase in CHF RSMR for every 10% increase in operating margin) (95% CI: 0.07%‐0.45%) (see Supporting Table 1 and Supporting Figure 2 in the online version of this article), which persisted after the exclusion of 5 extreme outliers. Using total margin, we observed a significant but modest association between improved hospital financial performance and increased mortality rate for CHF (nonlinear relationship, F statistic = 2.9, P = 0.03) (see Supporting Table 2 and Supporting Figure 3 in the online version of this article), which persisted after the exclusion of 3 extreme outliers (0.32% increase in CHF RSMR for every 10% increase in total margin) (95% CI: 0.03%‐0.62%).

Pneumonia

Hospital financial performance (using net revenue, operating margin, or total margin) was not associated with 30‐day PNA mortality or readmission rates.

Relationship of Readmission and Mortality Rates on Subsequent Hospital Financial Performance

Compared to hospitals in the highest tertile of readmission and mortality rates (ie, those with the worst rates), hospitals in the lowest tertile of readmission and mortality rates (ie, those with the best rates) had a similar magnitude of increase in net revenue from operations from 2008 to 2012 (Table 3). The difference‐in‐differences analyses showed no relationship between readmission or mortality rates for AMI, CHF, and PNA and changes in net revenue from operations from 2008 to 2012 (difference‐in‐differences estimates ranged from −$8.61 to $6.77 million, P > 0.3 for all). These results were robust to the exclusion of hospitals with a change in ownership and extreme outliers by net revenue from operations (data not reported).

Difference in the Differences in Financial Performance Between the Worst‐ and the Best‐Performing Hospitals
Outcome Tertile With Highest Outcome Rates (Worst Hospitals) Tertile With Lowest Outcome Rates (Best Hospitals) Difference in Net Revenue From Operations Differences Between Highest and Lowest Outcome Rate Tertiles, $ Million (95% CI) P
Outcome, Median % (IQR) Gain/Loss in Net Revenue From Operations From 2008 to 2012, $ Million* Outcome, Median % (IQR) Gain/Loss in Net Revenue from Operations From 2008 to 2012, $ Million*
  • NOTE: Abbreviations: AMI, acute myocardial infarction; CHF, congestive heart failure; CI, confidence interval; IQR, interquartile range; PNA, pneumonia. *Differences were calculated as net revenue from clinical operations in 2012 minus net revenue from clinical operations in 2008. Net revenue in 2008 was adjusted to 2012 US dollars using the chained Consumer Price Index for all urban consumers. Each outcome was modeled separately and adjusted for year, tertile of performance for the respective outcome, the interaction between year and tertile (difference‐in‐differences term), teaching status, metropolitan status (urban vs rural), bed size, safety‐net hospital status, hospital ownership type, Medicare caseload, volume of cases reported for the respective outcome, and interactions for year with bed size, safety‐net hospital status, and Medicare caseload, accounting for clustering of hospitals by owner.

AMI mortality 16.7 (16.2–17.4) +65.62 13.8 (13.3–14.2) +74.23 −8.61 (−27.95 to 10.73) 0.38
AMI readmit 20.7 (20.3–21.5) +38.62 18.3 (17.7–18.6) +31.85 +6.77 (−13.24 to 26.77) 0.50
CHF mortality 13.0 (12.3–13.9) +45.66 9.6 (8.9–10.1) +48.60 −2.94 (−11.61 to 5.73) 0.50
CHF readmit 26.2 (25.7–26.9) +47.08 23.0 (22.3–23.5) +46.08 +0.99 (−10.51 to 12.50) 0.87
PNA mortality 13.9 (13.3–14.7) +43.46 9.9 (9.3–10.4) +38.28 +5.18 (−7.01 to 17.37) 0.40
PNA readmit 19.4 (19.1–20.1) +47.21 17.0 (16.5–17.3) +45.45 +1.76 (−8.34 to 11.86) 0.73

DISCUSSION

Using audited financial data from California hospitals in 2008 and 2012, and CMS data on publicly reported outcomes from 2008 to 2011, we found no consistent relationship between hospital financial performance and publicly reported outcomes for AMI and PNA. However, better hospital financial performance was associated with a modest increase in 30‐day risk‐standardized CHF mortality rates, which was consistent across all 3 measures of hospital financial performance. Reassuringly, there was no difference in the change in net revenue from operations between 2008 and 2012 between hospitals in the highest and lowest tertiles of readmission and mortality rates for AMI, CHF, and PNA. In other words, hospitals with the lowest rates of 30‐day readmissions and mortality for AMI, CHF, and PNA did not experience a loss in net revenue from operations over time, compared to hospitals with the highest readmission and mortality rates.

Our study differs in several important ways from Ly et al., the only other study to our knowledge that investigated the relationship between hospital financial performance and outcomes for patients with AMI, CHF, and PNA.[19] First, outcomes in the Ly et al. study were ascertained in 2007, which preceded public reporting of outcomes. Second, the primary comparison was between hospitals in the bottom versus top decile of operating margin. Although Ly and colleagues also found no association between hospital financial performance and mortality rates for these 3 conditions, they found a significant absolute decrease of approximately 3% in readmission rates among hospitals in the top decile of operating margin versus those in bottom decile. However, readmission rates were comparable among the remaining 80% of hospitals, suggesting that these findings primarily reflected the influence of a few outlier hospitals. Third, the use of nonuniformly audited hospital financial data may have resulted in misclassification of financial performance. Our findings also differ from 2 previous studies that identified a modest association between improved hospital financial performance and decreased adverse patient safety events.[18, 20] However, publicly reported outcomes may not be fully representative of hospital quality and patient safety.[28, 29]

The limited association between hospital financial performance and publicly reported outcomes for AMI and PNA is noteworthy for several reasons. First, publicly reporting outcomes alone without concomitant changes to reimbursement may be inadequate to create strong financial incentives for hospital investment in quality improvement initiatives. Hospitals participating in both public reporting of outcomes and pay‐for‐performance have been shown to achieve greater improvements in outcomes than hospitals engaged only in public reporting.[30] Our time interval for ascertainment of outcomes preceded CMS implementation of the Hospital Readmissions Reduction Program (HRRP) in October 2012, which withholds up to 3% of Medicare hospital reimbursements for higher than expected readmission rates for AMI, CHF, and PNA. Once outcomes data become available for a 3‐year post‐HRRP implementation period, the impact of this combined approach can be assessed. Second, because adherence to many evidence‐based process measures for these conditions (ie, aspirin use in AMI) is already high, there may be a ceiling effect present that obviates the need for further hospital financial investment to optimize delivery of best practices.[31, 32] Third, hospitals themselves may contribute little to variation in mortality and readmission risk. Of the total variation in mortality and readmission rates among Texas Medicare beneficiaries, only about 1% is attributable to hospitals, whereas 42% to 56% of the variation is explained by differences in patient characteristics.[33, 34] Fourth, there is either low‐quality or insufficient evidence that transitional care interventions specifically targeted to patients with AMI or PNA result in better outcomes.[35] Thus, greater financial investment in hospital‐initiated and postdischarge transitional care interventions for these specific conditions may result in less than the desired effect. Lastly, many hospitalizations for these conditions are emergency hospitalizations that occur after patients present to the emergency department with unexpected and potentially life‐threatening symptoms. Thus, patients may not be able to incorporate the reputation or performance metrics of a hospital in their decisions for where they are hospitalized for AMI, CHF, or PNA despite the public reporting of outcomes.

Given the strong evidence that transitional care interventions reduce readmissions and mortality among patients hospitalized with CHF, we were surprised to find that improved hospital financial performance was associated with an increased risk‐adjusted CHF mortality rate.[36] This association held true for all 3 different measures of hospital financial performance, suggesting that this unexpected finding is unlikely to be the result of statistical chance, though potential reasons for this association remain unclear. One possibility is that the CMS model for CHF mortality may not adequately risk adjust for severity of illness.[37, 38] Thus, robust financial performance may be a marker for hospitals with more advanced heart failure services that care for more patients with severe illness.

Our findings should be interpreted in the context of certain limitations. Our study only included an analysis of outcomes for AMI, CHF, and PNA among older fee‐for‐service Medicare beneficiaries aggregated at the hospital level in California between 2008 and 2012, so generalizability to other populations, conditions, states, and time periods is uncertain. The observational design precludes a robust causal inference between financial performance and outcomes. For readmissions, rates were publicly reported for only the last 2 years of the 3‐year reporting period; thus, our findings may underestimate the association between hospital financial performance and publicly reported readmission rates.

CONCLUSION

There is no consistent relationship between hospital financial performance and subsequent publicly reported outcomes for AMI and PNA. However, for unclear reasons, hospitals with better financial performance had modestly higher CHF mortality rates. Given this limited association, public reporting of outcomes may have had less than the intended impact in motivating hospitals to invest in quality improvement. Additional financial incentives in addition to public reporting, such as readmissions penalties, may help motivate hospitals with robust financial performance to further improve outcomes. This would be a key area for future investigation once outcomes data are available for the 3‐year period following CMS implementation of readmissions penalties in 2012. Reassuringly, there was no association between low 30‐day mortality and readmissions rates and subsequent poor financial performance, suggesting that improved outcomes do not necessarily lead to loss of revenue.

Disclosures

Drs. Nguyen, Halm, and Makam were supported in part by the Agency for Healthcare Research and Quality University of Texas Southwestern Center for Patient‐Centered Outcomes Research (1R24HS022418‐01). Drs. Nguyen and Makam received funding from the University of Texas Southwestern KL2 Scholars Program (NIH/NCATS KL2 TR001103). The study sponsors had no role in design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. The authors have no conflicts of interest to disclose.

References
  1. Centers for Medicare and Medicaid Services. Office of the Actuary. National Health Statistics Group. National Healthcare Expenditures Data. Baltimore, MD; 2013. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical.html. Accessed February 16, 2016.
  2. Brill S. Bitter pill: why medical bills are killing us. Time Magazine. February 20, 2013:16–55.
  3. Ginsburg PB. Fee‐for‐service will remain a feature of major payment reforms, requiring more changes in Medicare physician payment. Health Aff (Millwood). 2012;31(9):1977–1983.
  4. Leatherman S, Berwick D, Iles D, et al. The business case for quality: case studies and an analysis. Health Aff (Millwood). 2003;22(2):17–30.
  5. Marshall MN, Shekelle PG, Davies HT, Smith PC. Public reporting on quality in the United States and the United Kingdom. Health Aff (Millwood). 2003;22(3):134–148.
  6. Swensen SJ, Dilling JA, Mc Carty PM, Bolton JW, Harper CM. The business case for health‐care quality improvement. J Patient Saf. 2013;9(1):44–52.
  7. Hibbard JH, Stockard J, Tusler M. Hospital performance reports: impact on quality, market share, and reputation. Health Aff (Millwood). 2005;24(4):1150–1160.
  8. Rauh SS, Wadsworth EB, Weeks WB, Weinstein JN. The savings illusion—why clinical quality improvement fails to deliver bottom‐line results. N Engl J Med. 2011;365(26):e48.
  9. Meyer JA, Silow‐Carroll S, Kutyla T, Stepnick LS, Rybowski LS. Hospital Quality: Ingredients for Success—Overview and Lessons Learned. New York, NY: Commonwealth Fund; 2004.
  10. Silow‐Carroll S, Alteras T, Meyer JA. Hospital Quality Improvement: Strategies and Lessons from U.S. Hospitals. New York, NY: Commonwealth Fund; 2007.
  11. Bazzoli GJ, Clement JP, Lindrooth RC, et al. Hospital financial condition and operational decisions related to the quality of hospital care. Med Care Res Rev. 2007;64(2):148–168.
  12. Casalino LP, Elster A, Eisenberg A, Lewis E, Montgomery J, Ramos D. Will pay‐for‐performance and quality reporting affect health care disparities? Health Aff (Millwood). 2007;26(3):w405–w414.
  13. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety‐net and non‐safety‐net hospitals. JAMA. 2008;299(18):2180–2187.
  14. Bhalla R, Kalkut G. Could Medicare readmission policy exacerbate health care system inequity? Ann Intern Med. 2010;152(2):114–117.
  15. Hernandez AF, Curtis LH. Minding the gap between efforts to reduce readmissions and disparities. JAMA. 2011;305(7):715–716.
  16. Terry DF, Moisuk S. Medicare Health Support Pilot Program. N Engl J Med. 2012;366(7):666; author reply 667–668.
  17. Fontanarosa PB, McNutt RA. Revisiting hospital readmissions. JAMA. 2013;309(4):398–400.
  18. Encinosa WE, Bernard DM. Hospital finances and patient safety outcomes. Inquiry. 2005;42(1):60–72.
  19. Ly DP, Jha AK, Epstein AM. The association between hospital margins, quality of care, and closure or other change in operating status. J Gen Intern Med. 2011;26(11):1291–1296.
  20. Bazzoli GJ, Chen HF, Zhao M, Lindrooth RC. Hospital financial condition and the quality of patient care. Health Econ. 2008;17(8):977–995.
  21. State of California Office of Statewide Health Planning and Development. Healthcare Information Division. Annual financial data. Available at: http://www.oshpd.ca.gov/HID/Products/Hospitals/AnnFinanData/PivotProfles/default.asp. Accessed June 23, 2015.
  22. Centers for Medicare 4(1):1113.
  23. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety‐net hospitals. Health Aff (Millwood). 2007;26(1):238–248.
  24. Silber JH, Rosenbaum PR, Brachet TJ, et al. The Hospital Compare mortality model and the volume‐outcome relationship. Health Serv Res. 2010;45(5 Pt 1):1148–1167.
  25. Tukey J. Exploratory Data Analysis. Boston, MA: Addison‐Wesley; 1977.
  26. Press MJ, Scanlon DP, Ryan AM, et al. Limits of readmission rates in measuring hospital quality suggest the need for added metrics. Health Aff (Millwood). 2013;32(6):1083–1091.
  27. Stefan MS, Pekow PS, Nsa W, et al. Hospital performance measures and 30‐day readmission rates. J Gen Intern Med. 2013;28(3):377–385.
  28. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486–496.
  29. Spatz ES, Sheth SD, Gosch KL, et al. Usual source of care and outcomes following acute myocardial infarction. J Gen Intern Med. 2014;29(6):862–869.
  30. Werner RM, Bradlow ET. Public reporting on hospital process improvements is linked to better patient outcomes. Health Aff (Millwood). 2010;29(7):1319–1324.
  31. Goodwin JS, Lin YL, Singh S, Kuo YF. Variation in length of stay and outcomes among hospitalized patients attributable to hospitals and hospitalists. J Gen Intern Med. 2013;28(3):370–376.
  32. Singh S, Lin YL, Kuo YF, Nattinger AB, Goodwin JS. Variation in the risk of readmission among hospitals: the relative contribution of patient, hospital and inpatient provider characteristics. J Gen Intern Med. 2014;29(4):572–578.
  33. Prvu Bettger J, Alexander KP, Dolor RJ, et al. Transitional care after hospitalization for acute stroke or myocardial infarction: a systematic review. Ann Intern Med. 2012;157(6):407–416.
  34. Jha AK, Orav EJ, Li Z, Epstein AM. The inverse relationship between mortality rates and performance in the Hospital Quality Alliance measures. Health Aff (Millwood). 2007;26(4):1104–1110.
  35. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30‐day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–988.
  36. Fuller RL, Atkinson G, Hughes JS. Indications of biased risk adjustment in the hospital readmission reduction program. J Ambul Care Manage. 2015;38(1):39–47.
Issue
Journal of Hospital Medicine - 11(7)
Page Number
481-488


Pneumonia

Hospital financial performance (using net revenue, operating margin, or total margin) was not associated with 30‐day PNA mortality or readmission rates.

Relationship of Readmission and Mortality Rates on Subsequent Hospital Financial Performance

Compared to hospitals in the highest tertile of readmission and mortality rates (ie, those with the worst rates), hospitals in the lowest tertile of readmission and mortality rates (ie, those with the best rates) had a similar magnitude of increase in net revenue from operations from 2008 to 2012 (Table 3). The difference‐in‐differences analyses showed no relationship between readmission or mortality rates for AMI, CHF, and PNA and changes in net revenue from operations from 2008 to 2012 (difference‐in‐differences estimates ranged from $8.61 to $6.77 million, P > 0.3 for all). These results were robust to the exclusion of hospitals with a change in ownership and extreme outliers by net revenue from operations (data not reported).

Difference in the Differences in Financial Performance Between the Worst‐ and the Best‐Performing Hospitals
Outcome Tertile With Highest Outcome Rates (Worst Hospitals) Tertile With Lowest Outcome Rates (Best Hospitals) Difference in Net From Operations Differences Between Highest and Lowest Outcome Rate Tertiles, $ Million (95% CI) P
Outcome, Median % (IQR) Gain/Loss in Net Revenue From Operations From 2008 to 2012, $ Million* Outcome, Median % (IQR) Gain/Loss in Net Revenue from Operations From 2008 to 2012, $ Million*
  • NOTE: Abbreviations: AMI, acute myocardial infarction; CHF, congestive heart failure; CI, confidence interval; IQR, interquartile range; PNA, pneumonia. *Differences were calculated as net revenue from clinical operations in 2012 minus net revenue from clinical operations in 2008. Net revenue in 2008 was adjusted to 2012 US dollars using the chained Consumer Price Index for all urban consumers. Each outcome was modeled separately and adjusted for year, tertile of performance for the respective outcome, the interaction between year and tertile (difference‐in‐differences term), teaching status, metropolitan status (urban vs rural), bed size, safety‐net hospital status, hospital ownership type, Medicare caseload, volume of cases reported for the respective outcome, and interactions for year with bed size, safety‐net hospital status, and Medicare caseload, accounting for clustering of hospitals by owner.

AMI mortality 16.7 (16.217.4) +65.62 13.8 (13.314.2) +74.23 8.61 (27.95 to 10.73) 0.38
AMI readmit 20.7 (20.321.5) +38.62 18.3 (17.718.6) +31.85 +6.77 (13.24 to 26.77) 0.50
CHF mortality 13.0 (12.313.9) +45.66 9.6 (8.910.1) +48.60 2.94 (11.61 to 5.73) 0.50
CHF readmit 26.2 (25.726.9) +47.08 23.0 (22.323.5) +46.08 +0.99 (10.51 to 12.50) 0.87
PNA mortality 13.9 (13.314.7) +43.46 9.9 (9.310.4) +38.28 +5.18 (7.01 to 17.37) 0.40
PNA readmit 19.4 (19.120.1) +47.21 17.0 (16.517.3) +45.45 +1.76 (8.34 to 11.86) 0.73

DISCUSSION

Using audited financial data from California hospitals in 2008 and 2012, and CMS data on publicly reported outcomes from 2008 to 2011, we found no consistent relationship between hospital financial performance and publicly reported outcomes for AMI and PNA. However, better hospital financial performance was associated with a modest increase in 30‐day risk‐standardized CHF mortality rates, which was consistent across all 3 measures of hospital financial performance. Reassuringly, there was no difference in the change in net revenue from operations between 2008 and 2012 between hospitals in the highest and lowest tertiles of readmission and mortality rates for AMI, CHF, and PNA. In other words, hospitals with the lowest rates of 30‐day readmissions and mortality for AMI, CHF, and PNA did not experience a loss in net revenue from operations over time, compared to hospitals with the highest readmission and mortality rates.

Our study differs in several important ways from Ly et al., the only other study to our knowledge that investigated the relationship between hospital financial performance and outcomes for patients with AMI, CHF, and PNA.[19] First, outcomes in the Ly et al. study were ascertained in 2007, which preceded public reporting of outcomes. Second, the primary comparison was between hospitals in the bottom versus top decile of operating margin. Although Ly and colleagues also found no association between hospital financial performance and mortality rates for these 3 conditions, they found a significant absolute decrease of approximately 3% in readmission rates among hospitals in the top decile of operating margin versus those in bottom decile. However, readmission rates were comparable among the remaining 80% of hospitals, suggesting that these findings primarily reflected the influence of a few outlier hospitals. Third, the use of nonuniformly audited hospital financial data may have resulted in misclassification of financial performance. Our findings also differ from 2 previous studies that identified a modest association between improved hospital financial performance and decreased adverse patient safety events.[18, 20] However, publicly reported outcomes may not be fully representative of hospital quality and patient safety.[28, 29]

The limited association between hospital financial performance and publicly reported outcomes for AMI and PNA is noteworthy for several reasons. First, publicly reporting outcomes alone without concomitant changes to reimbursement may be inadequate to create strong financial incentives for hospital investment in quality improvement initiatives. Hospitals participating in both public reporting of outcomes and pay‐for‐performance have been shown to achieve greater improvements in outcomes than hospitals engaged only in public reporting.[30] Our time interval for ascertainment of outcomes preceded CMS implementation of the Hospital Readmissions Reduction Program (HRRP) in October 2012, which withholds up to 3% of Medicare hospital reimbursements for higher than expected mortality and readmission rates for AMI, CHF, and PNA. Once outcomes data become available for a 3‐year post‐HRRP implementation period, the impact of this combined approach can be assessed. Second, because adherence to many evidence‐based process measures for these conditions (ie, aspirin use in AMI) is already high, there may be a ceiling effect present that obviates the need for further hospital financial investment to optimize delivery of best practices.[31, 32] Third, hospitals themselves may contribute little to variation in mortality and readmission risk. Of the total variation in mortality and readmission rates among Texas Medicare beneficiaries, only about 1% is attributable to hospitals, whereas 42% to 56% of the variation is explained by differences in patient characteristics.[33, 34] Fourth, there is either low‐quality or insufficient evidence that transitional care interventions specifically targeted to patients with AMI or PNA result in better outcomes.[35] Thus, greater financial investment in hospital‐initiated and postdischarge transitional care interventions for these specific conditions may result in less than the desired effect. Lastly, many hospitalizations for these conditions are emergency hospitalizations that occur after patients present to the emergency department with unexpected and potentially life‐threatening symptoms. Thus, patients may not be able to incorporate the reputation or performance metrics of a hospital in their decisions for where they are hospitalized for AMI, CHF, or PNA despite the public reporting of outcomes.

Given the strong evidence that transitional care interventions reduce readmissions and mortality among patients hospitalized with CHF, we were surprised to find that improved hospital financial performance was associated with an increased risk‐adjusted CHF mortality rate.[36] This association held true for all 3 different measures of hospital financial performance, suggesting that this unexpected finding is unlikely to be the result of statistical chance, though potential reasons for this association remain unclear. One possibility is that the CMS model for CHF mortality may not adequately risk adjust for severity of illness.[37, 38] Thus, robust financial performance may be a marker for hospitals with more advanced heart failure services that care for more patients with severe illness.

Our findings should be interpreted in the context of certain limitations. Our study only included an analysis of outcomes for AMI, CHF, and PNA among older fee‐for‐service Medicare beneficiaries aggregated at the hospital level in California between 2008 and 2012, so generalizability to other populations, conditions, states, and time periods is uncertain. The observational design precludes a robust causal inference between financial performance and outcomes. For readmissions, rates were publicly reported for only the last 2 years of the 3‐year reporting period; thus, our findings may underestimate the association between hospital financial performance and publicly reported readmission rates.

CONCLUSION

There is no consistent relationship between hospital financial performance and subsequent publicly reported outcomes for AMI and PNA. However, for unclear reasons, hospitals with better financial performance had modestly higher CHF mortality rates. Given this limited association, public reporting of outcomes may have had less than the intended impact in motivating hospitals to invest in quality improvement. Additional financial incentives in addition to public reporting, such as readmissions penalties, may help motivate hospitals with robust financial performance to further improve outcomes. This would be a key area for future investigation once outcomes data are available for the 3‐year period following CMS implementation of readmissions penalties in 2012. Reassuringly, there was no association between low 30‐day mortality and readmissions rates and subsequent poor financial performance, suggesting that improved outcomes do not necessarily lead to loss of revenue.

Disclosures

Drs. Nguyen, Halm, and Makam were supported in part by the Agency for Healthcare Research and Quality University of Texas Southwestern Center for Patient‐Centered Outcomes Research (1R24HS022418‐01). Drs. Nguyen and Makam received funding from the University of Texas Southwestern KL2 Scholars Program (NIH/NCATS KL2 TR001103). The study sponsors had no role in design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. The authors have no conflicts of interest to disclose.

Hospital care accounts for the single largest category of national healthcare expenditures, totaling $936.9 billion in 2013.[1] With ongoing scrutiny of US healthcare spending, hospitals are under increasing pressure to justify high costs and robust profits.[2] However, the dominant fee‐for‐service reimbursement model creates incentives for hospitals to prioritize high volume over high‐quality care to maximize profits.[3] Because hospitals may be reluctant to implement improvements if better quality is not accompanied by better payment or improved financial margins, an approach to stimulate quality improvement among hospitals has been to leverage consumer pressure through required public reporting of selected outcome metrics.[4, 5] Public reporting of outcomes is thought to influence hospital reputation; in turn, reputation affects patient perceptions and influences demand for hospital services, potentially enabling reputable hospitals to command higher prices for services to enhance hospital revenue.[6, 7]

Though improving outcomes is thought to reduce overall healthcare costs, it is unclear whether improving outcomes yields a financial return on investment for the hospital.[4, 5, 8] Quality improvement can require substantial upfront investment, so hospitals may need to be in robust financial health before they can engage in such initiatives.[9, 10] Consequently, instead of stimulating broad efforts in quality improvement, public reporting may exacerbate existing disparities in hospital quality and finances by rewarding already financially healthy hospitals and inadvertently penalizing hospitals without the means to invest in quality improvement.[11, 12, 13, 14, 15] Alternatively, because fee‐for‐service remains the dominant reimbursement model for hospitals, the revenue lost by reducing readmissions may outweigh any financial gains from improved public reputation and result in worse overall financial performance, though robust evidence for this concern is lacking.[16, 17]

A small number of existing studies suggest a limited correlation between improved hospital financial performance and improved quality, patient safety, and lower readmission rates.[18, 19, 20] However, these studies had several limitations. They were conducted prior to public reporting of selected outcome metrics by the Centers for Medicare and Medicaid Services (CMS)[18, 19, 20]; used data from the Medicare Cost Report, which is not uniformly audited and thus prone to measurement error[19, 20]; used only relative measures of hospital financial performance (eg, operating margin), which do not capture the absolute amount of revenue potentially available for investment in quality improvement[18, 19]; or compared only hospitals at the extremes of financial performance, potentially exaggerating the magnitude of the relationship between hospital financial performance and quality outcomes.[19]

To address this gap in the literature, we sought to assess whether hospitals with robust financial performance have lower 30‐day risk‐standardized mortality and hospital readmission rates for acute myocardial infarction (AMI), congestive heart failure (CHF), and pneumonia (PNA). Given the concern that hospitals with the lowest mortality and readmission rates may experience a decrease in financial performance due to the lower volume of hospitalizations, we also assessed whether hospitals with the lowest readmission and mortality rates had a differential change in financial performance over time compared to hospitals with the highest rates.

METHODS

Data Sources and Study Population

This was an observational study using audited financial data from the 2008 and 2012 Hospital Annual Financial Data Files from the Office of Statewide Health Planning and Development (OSHPD) in the state of California, merged with data on outcome measures publicly reported by CMS via the Hospital Compare website for July 1, 2008, through June 30, 2011.[21, 22] We included all general acute care hospitals with available OSHPD data in 2008 and at least 1 publicly reported outcome from 2008 to 2011. We excluded hospitals without a full year of audited financial data for 2008 and hospitals that closed during 2008 to 2011.
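
As a minimal sketch, the linkage and exclusions might look like the following; the file names, key variable, and exclusion flags are assumptions for illustration and are not the authors' actual OSHPD or Hospital Compare extracts.

```
* Illustrative sketch only: file names, variable names, and exclusion flags are
* assumptions, not the authors' actual OSHPD or Hospital Compare files.
use "oshpd_financials_2008.dta", clear
merge 1:1 provider_id using "hospital_compare_2008_2011.dta", keep(match) nogenerate
drop if audited_months_2008 < 12      // no full year of audited 2008 financial data
drop if closed_2008_2011 == 1         // hospital closed during the reporting period
```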

Measures of Financial Performance

Because we hypothesized that the absolute amount of revenue generated from clinical operations would influence investment in quality improvement programs more so than relative changes in revenue,[20] we used net revenue from operations (total operating revenue minus total operating expense) as our primary measure of hospital financial performance. We also performed 2 companion analyses using 2 commonly reported relative measures of financial performance: operating margin (net revenue from operations divided by total operating revenue) and total margin (net total revenue divided by total revenue from all sources). Net revenue from operations for 2008 was adjusted to 2012 US dollars using the chained Consumer Price Index for all urban consumers.
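
A minimal sketch of these three measures, assuming one row per hospital-year with the audited revenue and expense totals, is shown below; the variable names and the CPI ratio are illustrative assumptions rather than OSHPD field names or the published index values.

```
* Sketch of the three financial performance measures; variable names are assumed.
gen net_rev_ops  = total_oper_revenue - total_oper_expense      // net revenue from operations, $
gen oper_margin  = 100 * net_rev_ops / total_oper_revenue       // operating margin, %
gen total_margin = 100 * net_total_revenue / total_all_revenue  // total margin, %

* Express 2008 net revenue in 2012 dollars using the chained CPI-U.
scalar cpi_ratio = 1.07   // placeholder 2012/2008 chained CPI-U ratio, not the published value
replace net_rev_ops = net_rev_ops * cpi_ratio if year == 2008
```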

Outcomes

For our primary analysis, the primary outcomes were publicly reported all‐cause 30‐day risk‐standardized mortality rates (RSMR) and readmission rates (RSRR) for AMI, CHF, and PNA aggregated over a 3‐year period. These measures were adjusted for key demographic and clinical characteristics available in Medicare data. CMS began publicly reporting 30‐day RSMR for AMI and CHF in June 2007, RSMR for PNA in June 2008, and RSRR for all 3 conditions in July 2009.[23, 24]

To assess whether public reporting had an effect on subsequent hospital financial performance, we conducted a companion analysis where the primary outcome of interest was change in hospital financial performance over time, using the same definitions of financial performance outlined above. For this companion analysis, publicly reported 30‐day RSMR and RSRR for AMI, CHF, and PNA were assessed as predictors of subsequent financial performance.

Hospital Characteristics

Hospital characteristics were ascertained from the OSHPD data. Safety‐net status was defined as hospitals with an annual Medicaid caseload (number of Medicaid discharges divided by the total number of discharges) at least 1 standard deviation above the mean Medicaid caseload, as defined in previous studies.[25]
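
A sketch of this safety-net definition, assuming one row per hospital-year with Medicaid and total discharge counts (all variable names are assumptions):

```
* Safety-net status: Medicaid caseload at least 1 SD above the mean for that year.
* Variable names are assumed; one row per hospital-year.
gen medicaid_caseload = medicaid_discharges / total_discharges
bysort year: egen caseload_mean = mean(medicaid_caseload)
bysort year: egen caseload_sd   = sd(medicaid_caseload)
gen byte safety_net = medicaid_caseload >= caseload_mean + caseload_sd
```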

Statistical Analyses

Effect of Baseline Financial Performance on Subsequent Publicly Reported Outcomes

To estimate the relationship between baseline hospital financial performance in 2008 and subsequent RSMR and RSRR for AMI, CHF, and PNA from 2008 to 2011, we used linear regression adjusted for the following hospital characteristics: teaching status, rural location, bed size, safety‐net status, ownership, Medicare caseload, and volume of cases reported for the respective outcome. We accounted for clustering of hospitals by ownership. We adjusted for hospital volume of reported cases for each condition given that the risk‐standardization models used by CMS shrink outcomes for small hospitals to the mean, and therefore do not account for a potential volume‐outcome relationship.[26] We conducted a sensitivity analysis excluding hospitals at the extremes of financial performance, defined as hospitals with extreme outlier values for each financial performance measure (eg, values more than 3 times the interquartile range below the first quartile or above the third quartile).[27] Nonlinearity of financial performance measures was assessed using restricted cubic splines. For ease of interpretation, we scaled the estimated change in RSMR and RSRR per $50 million increase in net revenue from operations, and graphed nonparametric relationships using restricted cubic splines.
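
The following is a sketch of this cross-sectional model for one outcome (CHF mortality), under stated assumptions: the variable names, knot count, and outlier rule coding are illustrative and are not taken from the authors' code.

```
* Cross-sectional model for one outcome (CHF mortality shown); names are illustrative.
gen netrev50 = net_rev_ops / 50e6            // net revenue in $50 million units
mkspline nr_sp = netrev50, cubic nknots(4)   // restricted cubic spline terms

regress rsmr_chf nr_sp* i.teaching i.rural i.bedsize_cat i.safety_net ///
    i.owner_type c.medicare_pct c.chf_cases, vce(cluster owner_id)
testparm nr_sp*                              // joint F test for the revenue terms

* Sensitivity analysis: flag extreme outliers (more than 3 x IQR beyond Q1 or Q3).
summarize net_rev_ops, detail
gen byte extreme_outlier = (net_rev_ops < r(p25) - 3 * (r(p75) - r(p25))) | ///
                           (net_rev_ops > r(p75) + 3 * (r(p75) - r(p25)))
```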

Effect of Public Reporting on Subsequent Hospital Financial Performance

To assess whether public reporting had an effect on subsequent hospital financial performance, we conducted a companion hospital‐level difference‐in‐differences analysis to assess for differential changes in hospital financial performance between 2008 and 2012, stratified by tertiles of RSMR and RSRR from 2008 to 2011. This approach compares the change in an outcome of interest (hospital financial performance) within each group (where each group is a tertile of publicly reported RSMR or RSRR), and then compares the difference in these changes between groups. Each tertile therefore serves as its own historical control, with the opposite tertile as a concurrent control to account for potential secular trends. To conduct our difference‐in‐differences analysis, we compared the change in financial performance over time in the top tertile of hospitals to the change in financial performance over time in the bottom tertile of hospitals with respect to AMI, CHF, and PNA RSMR and RSRR. Our models therefore included year (2008 vs 2012), tertile of publicly reported RSMR or RSRR, and the interaction between them as predictors, where the interaction was the difference‐in‐differences term and the primary predictor of interest. In addition to adjusting for hospital characteristics and accounting for clustering as described above, we also included 3 separate interaction terms for year with bed size, safety‐net status, and Medicare caseload to account for changes in hospitals over time that may have independently influenced both financial performance and publicly reported 30‐day measures. For sensitivity analyses, we repeated our difference‐in‐differences analyses excluding hospitals with a change in ownership and extreme outliers with respect to financial performance in 2008. We performed model diagnostics, including assessment of functional form, linearity, normality, constant variance, and model misspecification. All analyses were conducted using Stata version 12.1 (StataCorp, College Station, TX). This study was deemed exempt from review by the UT Southwestern Medical Center institutional review board.
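
A sketch of this difference‐in‐differences specification for one outcome (tertiles of AMI mortality) is shown below; the variable names and coding are assumptions, and the year-by-tertile interaction is the difference‐in‐differences term of interest.

```
* Difference-in-differences sketch for one outcome (tertiles of AMI mortality); names assumed.
keep if inlist(ami_rsmr_tertile, 1, 3)       // bottom (best) vs top (worst) tertile only
regress net_rev_ops i.year##i.ami_rsmr_tertile ///
    i.teaching i.rural i.bedsize_cat i.safety_net i.owner_type ///
    c.medicare_pct c.ami_cases ///
    i.year#i.bedsize_cat i.year#i.safety_net i.year#c.medicare_pct, ///
    vce(cluster owner_id)
* The year x tertile interaction coefficient is the difference-in-differences estimate.
```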

RESULTS

Among the 279 included hospitals (see Supporting Figure 1 in the online version of this article), 278 also had financial data available for 2012. In 2008, the median net revenue from operations was $1.6 million (interquartile range [IQR], −$2.4 to $10.3 million), the median operating margin was 1.5% (IQR, −4.6% to 6%), and the median total margin was 2.5% (IQR, −2.2% to 7.5%) (Table 1). The number of hospitals reporting each outcome, and median outcome rates, are shown in Table 2.

Hospital Characteristics and Financial Performance in 2008 and 2012
2008, n = 279 2012, n = 278
  • NOTE: Abbreviations: IQR, interquartile range; SD, standard deviation. *Medicaid caseload equivalent to ≥1 standard deviation above the mean (41.8% for 2008 and 42.1% for 2012). Operated by an investor‐individual, investor‐partnership, or investor‐corporation.

Hospital characteristics
Teaching, n (%) 28 (10.0) 28 (10.0)
Rural, n (%) 55 (19.7) 55 (19.7)
Bed size, n (%)
0–99 (small) 57 (20.4) 55 (19.8)
100–299 (medium) 130 (46.6) 132 (47.5)
≥300 (large) 92 (33.0) 91 (32.7)
Safety‐net hospital, n (%)* 46 (16.5) 48 (17.3)
Hospital ownership, n (%)
City or county 15 (5.4) 16 (5.8)
District 42 (15.1) 39 (14.0)
Investor 66 (23.7) 66 (23.7)
Private nonprofit 156 (55.9) 157 (56.5)
Medicare caseload, mean % (SD) 41.6 (14.7) 43.6 (14.7)
Financial performance measures
Net revenue from operations, median $ in millions (IQR; range) 1.6 (−2.4 to 10.3; −495.9 to 144.1) 3.2 (−2.9 to 15.4; −396.2 to 276.8)
Operating margin, median % (IQR; range) 1.5 (−4.6 to 6.8; −77.8 to 26.4) 2.3 (−3.9 to 8.2; −134.8 to 21.1)
Total margin, median % (IQR; range) 2.5 (−2.2 to 7.5; −101.0 to 26.3) 4.5 (0.7 to 9.8; −132.2 to 31.1)
Relationship Between Hospital Financial Performance and 30‐Day Mortality and Readmission Rates*
No. Median % (IQR) Adjusted % Change (95% CI) per $50 Million Increase in Net Revenue From Operations
Overall Extreme Outliers Excluded
  • NOTE: Abbreviations: CI, confidence interval; IQR, interquartile range. *Thirty‐day outcomes are risk standardized for age, sex, comorbidity count, and indicators of patient frailty.[3] Each outcome was modeled separately and adjusted for teaching status, metropolitan status (urban vs rural), bed size, safety‐net hospital status, hospital ownership type, Medicare caseload, and volume of cases reported for the respective outcome, accounting for clustering of hospitals by owner. Twenty‐three hospitals were identified as extreme outliers with respect to net revenue from operations (10 underperformers with net revenue <−$49.4 million and 13 overperformers with net revenue >$52.1 million). There was a nonlinear and statistically significant relationship between net revenue from operations and readmission rate for myocardial infarction. Net revenue from operations was modeled as a cubic spline function. See Figure 1. The overall adjusted F statistic was 4.8 (P < 0.001). There was a nonlinear and statistically significant relationship between net revenue from operations and mortality rate for heart failure after exclusion of extreme outliers. Net revenue from operations was modeled as a cubic spline function. See Figure 1. The overall adjusted F statistic was 3.6 (P = 0.008).

Myocardial infarction
Mortality rate 211 15.2 (14.2–16.2) 0.07 (−0.10 to 0.24) 0.63 (−0.21 to 1.48)
Readmission rate 184 19.4 (18.5–20.2) Nonlinear −0.34 (−1.17 to 0.50)
Congestive heart failure
Mortality rate 259 11.1 (10.1–12.1) 0.17 (−0.01 to 0.35) Nonlinear
Readmission rate 264 24.5 (23.5–25.6) −0.07 (−0.27 to 0.14) −0.45 (−1.36 to 0.47)
Pneumonia
Mortality rate 268 11.6 (10.4–13.2) −0.17 (−0.42 to 0.07) −0.35 (−1.19 to 0.49)
Readmission rate 268 18.2 (17.3–19.1) −0.04 (−0.20 to 0.11) −0.56 (−1.27 to 0.16)

Relationship Between Financial Performance and Publicly Reported Outcomes

Acute Myocardial Infarction

We did not observe a consistent relationship between hospital financial performance and AMI mortality and readmission rates. In our overall adjusted analyses, net revenue from operations was not associated with mortality, but was significantly associated with a decrease in AMI readmissions among hospitals with net revenue from operations between approximately $5 million and $145 million (nonlinear relationship, F statistic = 4.8, P < 0.001) (Table 2, Figure 1A). However, after excluding 23 extreme outlying hospitals by net revenue from operations (10 underperformers with net revenue <−$49.4 million and 13 overperformers with net revenue >$52.1 million), this relationship was no longer observed. Using operating margin instead of net revenue from operations as the measure of hospital financial performance, we observed a 0.2% increase in AMI mortality (95% confidence interval [CI]: 0.06%‐0.35%) for each 10% increase in operating margin (see Supporting Table 1 and Supporting Figure 2 in the online version of this article), which persisted with the exclusion of 5 outlying hospitals by operating margin (all 5 were underperformers, with operating margins <−38.6%). However, using total margin as the measure of financial performance, there was no significant relationship with either mortality or readmissions (see Supporting Table 2 and Supporting Figure 3 in the online version of this article).

Figure 1
Relationship between financial performance and 30‐day readmission and mortality. The open circles represent individual hospitals. The bold dashed line and the bold solid line are the unadjusted and adjusted cubic spline curves, respectively, representing the nonlinear relationship between net revenue from operations and each outcome. The shaded grey area represents the 95% confidence interval for the adjusted cubic spline curve. Thin vertical dashed lines represent median values for net revenue from operations. Multivariate models were adjusted for teaching status, metropolitan status (urban vs rural), bed size, safety‐net hospital status, hospital ownership, Medicare caseload, and volume of cases reported for the respective outcome, accounting for clustering of hospitals by owner. *Twenty‐three hospitals were identified as outliers with respect to net revenue from clinical operations (10 “underperformers” with net revenue <−$49.4 million and 13 “overperformers” with net revenue >$52.1 million).

Congestive Heart Failure

In our primary analyses, we did not observe a significant relationship between net revenue from operations and CHF mortality and readmission rates. However, after excluding 23 extreme outliers, increasing net revenue from operations was associated with a modest increase in CHF mortality among hospitals with net revenue between approximately −$35 million and $20 million (nonlinear relationship, F statistic = 3.6, P = 0.008) (Table 2, Figure 1B). Using alternate measures of financial performance, we observed a consistent relationship between increasing hospital financial performance and a higher 30‐day CHF mortality rate. Using operating margin, we observed a slight increase in the mortality rate for CHF (0.26% increase in CHF RSMR for every 10% increase in operating margin; 95% CI: 0.07%‐0.45%) (see Supporting Table 1 and Supporting Figure 2 in the online version of this article), which persisted after the exclusion of 5 extreme outliers. Using total margin, we observed a significant but modest association between improved hospital financial performance and increased mortality rate for CHF (nonlinear relationship, F statistic = 2.9, P = 0.03) (see Supporting Table 2 and Supporting Figure 3 in the online version of this article), which persisted after the exclusion of 3 extreme outliers (0.32% increase in CHF RSMR for every 10% increase in total margin; 95% CI: 0.03%‐0.62%).

Pneumonia

Hospital financial performance (using net revenue, operating margin, or total margin) was not associated with 30‐day PNA mortality or readmission rates.

Relationship of Readmission and Mortality Rates to Subsequent Hospital Financial Performance

Compared to hospitals in the highest tertile of readmission and mortality rates (ie, those with the worst rates), hospitals in the lowest tertile of readmission and mortality rates (ie, those with the best rates) had a similar magnitude of increase in net revenue from operations from 2008 to 2012 (Table 3). The difference‐in‐differences analyses showed no relationship between readmission or mortality rates for AMI, CHF, and PNA and changes in net revenue from operations from 2008 to 2012 (difference‐in‐differences estimates ranged from −$8.61 to $6.77 million, P > 0.3 for all). These results were robust to the exclusion of hospitals with a change in ownership and extreme outliers by net revenue from operations (data not reported).

Difference in the Differences in Financial Performance Between the Worst‐ and the Best‐Performing Hospitals
Outcome Tertile With Highest Outcome Rates (Worst Hospitals) Tertile With Lowest Outcome Rates (Best Hospitals) Difference in Net Revenue From Operations Differences Between Highest and Lowest Outcome Rate Tertiles, $ Million (95% CI) P
Outcome, Median % (IQR) Gain/Loss in Net Revenue From Operations From 2008 to 2012, $ Million* Outcome, Median % (IQR) Gain/Loss in Net Revenue from Operations From 2008 to 2012, $ Million*
  • NOTE: Abbreviations: AMI, acute myocardial infarction; CHF, congestive heart failure; CI, confidence interval; IQR, interquartile range; PNA, pneumonia. *Differences were calculated as net revenue from clinical operations in 2012 minus net revenue from clinical operations in 2008. Net revenue in 2008 was adjusted to 2012 US dollars using the chained Consumer Price Index for all urban consumers. Each outcome was modeled separately and adjusted for year, tertile of performance for the respective outcome, the interaction between year and tertile (difference‐in‐differences term), teaching status, metropolitan status (urban vs rural), bed size, safety‐net hospital status, hospital ownership type, Medicare caseload, volume of cases reported for the respective outcome, and interactions for year with bed size, safety‐net hospital status, and Medicare caseload, accounting for clustering of hospitals by owner.

AMI mortality 16.7 (16.2–17.4) +65.62 13.8 (13.3–14.2) +74.23 −8.61 (−27.95 to 10.73) 0.38
AMI readmit 20.7 (20.3–21.5) +38.62 18.3 (17.7–18.6) +31.85 +6.77 (−13.24 to 26.77) 0.50
CHF mortality 13.0 (12.3–13.9) +45.66 9.6 (8.9–10.1) +48.60 −2.94 (−11.61 to 5.73) 0.50
CHF readmit 26.2 (25.7–26.9) +47.08 23.0 (22.3–23.5) +46.08 +0.99 (−10.51 to 12.50) 0.87
PNA mortality 13.9 (13.3–14.7) +43.46 9.9 (9.3–10.4) +38.28 +5.18 (−7.01 to 17.37) 0.40
PNA readmit 19.4 (19.1–20.1) +47.21 17.0 (16.5–17.3) +45.45 +1.76 (−8.34 to 11.86) 0.73

DISCUSSION

Using audited financial data from California hospitals in 2008 and 2012, and CMS data on publicly reported outcomes from 2008 to 2011, we found no consistent relationship between hospital financial performance and publicly reported outcomes for AMI and PNA. However, better hospital financial performance was associated with a modest increase in 30‐day risk‐standardized CHF mortality rates, which was consistent across all 3 measures of hospital financial performance. Reassuringly, there was no difference in the change in net revenue from operations between 2008 and 2012 between hospitals in the highest and lowest tertiles of readmission and mortality rates for AMI, CHF, and PNA. In other words, hospitals with the lowest rates of 30‐day readmissions and mortality for AMI, CHF, and PNA did not experience a loss in net revenue from operations over time, compared to hospitals with the highest readmission and mortality rates.

Our study differs in several important ways from Ly et al., the only other study to our knowledge that investigated the relationship between hospital financial performance and outcomes for patients with AMI, CHF, and PNA.[19] First, outcomes in the Ly et al. study were ascertained in 2007, which preceded public reporting of outcomes. Second, the primary comparison was between hospitals in the bottom versus top decile of operating margin. Although Ly and colleagues also found no association between hospital financial performance and mortality rates for these 3 conditions, they found a significant absolute decrease of approximately 3% in readmission rates among hospitals in the top decile of operating margin versus those in the bottom decile. However, readmission rates were comparable among the remaining 80% of hospitals, suggesting that these findings primarily reflected the influence of a few outlier hospitals. Third, the use of nonuniformly audited hospital financial data may have resulted in misclassification of financial performance. Our findings also differ from 2 previous studies that identified a modest association between improved hospital financial performance and decreased adverse patient safety events.[18, 20] However, publicly reported outcomes may not be fully representative of hospital quality and patient safety.[28, 29]

The limited association between hospital financial performance and publicly reported outcomes for AMI and PNA is noteworthy for several reasons. First, publicly reporting outcomes alone without concomitant changes to reimbursement may be inadequate to create strong financial incentives for hospital investment in quality improvement initiatives. Hospitals participating in both public reporting of outcomes and pay‐for‐performance have been shown to achieve greater improvements in outcomes than hospitals engaged only in public reporting.[30] Our time interval for ascertainment of outcomes preceded CMS implementation of the Hospital Readmissions Reduction Program (HRRP) in October 2012, which withholds up to 3% of Medicare hospital reimbursements for higher than expected readmission rates for AMI, CHF, and PNA. Once outcomes data become available for a 3‐year post‐HRRP implementation period, the impact of this combined approach can be assessed. Second, because adherence to many evidence‐based process measures for these conditions (eg, aspirin use in AMI) is already high, there may be a ceiling effect present that obviates the need for further hospital financial investment to optimize delivery of best practices.[31, 32] Third, hospitals themselves may contribute little to variation in mortality and readmission risk. Of the total variation in mortality and readmission rates among Texas Medicare beneficiaries, only about 1% is attributable to hospitals, whereas 42% to 56% of the variation is explained by differences in patient characteristics.[33, 34] Fourth, there is either low‐quality or insufficient evidence that transitional care interventions specifically targeted to patients with AMI or PNA result in better outcomes.[35] Thus, greater financial investment in hospital‐initiated and postdischarge transitional care interventions for these specific conditions may result in less than the desired effect. Lastly, many hospitalizations for these conditions are emergency hospitalizations that occur after patients present to the emergency department with unexpected and potentially life‐threatening symptoms. Thus, patients may not be able to incorporate the reputation or performance metrics of a hospital in their decisions for where they are hospitalized for AMI, CHF, or PNA despite the public reporting of outcomes.

Given the strong evidence that transitional care interventions reduce readmissions and mortality among patients hospitalized with CHF, we were surprised to find that improved hospital financial performance was associated with an increased risk‐adjusted CHF mortality rate.[36] This association held true for all 3 different measures of hospital financial performance, suggesting that this unexpected finding is unlikely to be the result of statistical chance, though potential reasons for this association remain unclear. One possibility is that the CMS model for CHF mortality may not adequately risk adjust for severity of illness.[37, 38] Thus, robust financial performance may be a marker for hospitals with more advanced heart failure services that care for more patients with severe illness.

Our findings should be interpreted in the context of certain limitations. Our study only included an analysis of outcomes for AMI, CHF, and PNA among older fee‐for‐service Medicare beneficiaries aggregated at the hospital level in California between 2008 and 2012, so generalizability to other populations, conditions, states, and time periods is uncertain. The observational design precludes a robust causal inference between financial performance and outcomes. For readmissions, rates were publicly reported for only the last 2 years of the 3‐year reporting period; thus, our findings may underestimate the association between hospital financial performance and publicly reported readmission rates.

CONCLUSION

There is no consistent relationship between hospital financial performance and subsequent publicly reported outcomes for AMI and PNA. However, for unclear reasons, hospitals with better financial performance had modestly higher CHF mortality rates. Given this limited association, public reporting of outcomes may have had less than the intended impact in motivating hospitals to invest in quality improvement. Financial incentives in addition to public reporting, such as readmission penalties, may help motivate hospitals with robust financial performance to further improve outcomes. This would be a key area for future investigation once outcomes data are available for the 3‐year period following CMS implementation of readmission penalties in 2012. Reassuringly, there was no association between low 30‐day mortality and readmission rates and subsequent poor financial performance, suggesting that improved outcomes do not necessarily lead to loss of revenue.

Disclosures

Drs. Nguyen, Halm, and Makam were supported in part by the Agency for Healthcare Research and Quality University of Texas Southwestern Center for Patient‐Centered Outcomes Research (1R24HS022418‐01). Drs. Nguyen and Makam received funding from the University of Texas Southwestern KL2 Scholars Program (NIH/NCATS KL2 TR001103). The study sponsors had no role in design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. The authors have no conflicts of interest to disclose.

References
  1. Centers for Medicare and Medicaid Services. Office of the Actuary. National Health Statistics Group. National Healthcare Expenditures Data. Baltimore, MD; 2013. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical.html. Accessed February 16, 2016.
  2. Brill S. Bitter pill: why medical bills are killing us. Time Magazine. February 20, 2013:16–55.
  3. Ginsburg PB. Fee‐for‐service will remain a feature of major payment reforms, requiring more changes in Medicare physician payment. Health Aff (Millwood). 2012;31(9):1977–1983.
  4. Leatherman S, Berwick D, Iles D, et al. The business case for quality: case studies and an analysis. Health Aff (Millwood). 2003;22(2):17–30.
  5. Marshall MN, Shekelle PG, Davies HT, Smith PC. Public reporting on quality in the United States and the United Kingdom. Health Aff (Millwood). 2003;22(3):134–148.
  6. Swensen SJ, Dilling JA, Mc Carty PM, Bolton JW, Harper CM. The business case for health‐care quality improvement. J Patient Saf. 2013;9(1):44–52.
  7. Hibbard JH, Stockard J, Tusler M. Hospital performance reports: impact on quality, market share, and reputation. Health Aff (Millwood). 2005;24(4):1150–1160.
  8. Rauh SS, Wadsworth EB, Weeks WB, Weinstein JN. The savings illusion—why clinical quality improvement fails to deliver bottom‐line results. N Engl J Med. 2011;365(26):e48.
  9. Meyer JA, Silow‐Carroll S, Kutyla T, Stepnick LS, Rybowski LS. Hospital Quality: Ingredients for Success—Overview and Lessons Learned. New York, NY: Commonwealth Fund; 2004.
  10. Silow‐Carroll S, Alteras T, Meyer JA. Hospital Quality Improvement: Strategies and Lessons from U.S. Hospitals. New York, NY: Commonwealth Fund; 2007.
  11. Bazzoli GJ, Clement JP, Lindrooth RC, et al. Hospital financial condition and operational decisions related to the quality of hospital care. Med Care Res Rev. 2007;64(2):148–168.
  12. Casalino LP, Elster A, Eisenberg A, Lewis E, Montgomery J, Ramos D. Will pay‐for‐performance and quality reporting affect health care disparities? Health Aff (Millwood). 2007;26(3):w405–w414.
  13. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety‐net and non‐safety‐net hospitals. JAMA. 2008;299(18):2180–2187.
  14. Bhalla R, Kalkut G. Could Medicare readmission policy exacerbate health care system inequity? Ann Intern Med. 2010;152(2):114–117.
  15. Hernandez AF, Curtis LH. Minding the gap between efforts to reduce readmissions and disparities. JAMA. 2011;305(7):715–716.
  16. Terry DF, Moisuk S. Medicare Health Support Pilot Program. N Engl J Med. 2012;366(7):666; author reply 667–668.
  17. Fontanarosa PB, McNutt RA. Revisiting hospital readmissions. JAMA. 2013;309(4):398–400.
  18. Encinosa WE, Bernard DM. Hospital finances and patient safety outcomes. Inquiry. 2005;42(1):60–72.
  19. Ly DP, Jha AK, Epstein AM. The association between hospital margins, quality of care, and closure or other change in operating status. J Gen Intern Med. 2011;26(11):1291–1296.
  20. Bazzoli GJ, Chen HF, Zhao M, Lindrooth RC. Hospital financial condition and the quality of patient care. Health Econ. 2008;17(8):977–995.
  21. State of California Office of Statewide Health Planning and Development. Healthcare Information Division. Annual financial data. Available at: http://www.oshpd.ca.gov/HID/Products/Hospitals/AnnFinanData/PivotProfles/default.asp. Accessed June 23, 2015.
  22. Centers for Medicare 4(1):1113.
  23. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety‐net hospitals. Health Aff (Millwood). 2007;26(1):238–248.
  24. Silber JH, Rosenbaum PR, Brachet TJ, et al. The Hospital Compare mortality model and the volume‐outcome relationship. Health Serv Res. 2010;45(5 Pt 1):1148–1167.
  25. Tukey J. Exploratory Data Analysis. Boston, MA: Addison‐Wesley; 1977.
  26. Press MJ, Scanlon DP, Ryan AM, et al. Limits of readmission rates in measuring hospital quality suggest the need for added metrics. Health Aff (Millwood). 2013;32(6):1083–1091.
  27. Stefan MS, Pekow PS, Nsa W, et al. Hospital performance measures and 30‐day readmission rates. J Gen Intern Med. 2013;28(3):377–385.
  28. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486–496.
  29. Spatz ES, Sheth SD, Gosch KL, et al. Usual source of care and outcomes following acute myocardial infarction. J Gen Intern Med. 2014;29(6):862–869.
  30. Werner RM, Bradlow ET. Public reporting on hospital process improvements is linked to better patient outcomes. Health Aff (Millwood). 2010;29(7):1319–1324.
  31. Goodwin JS, Lin YL, Singh S, Kuo YF. Variation in length of stay and outcomes among hospitalized patients attributable to hospitals and hospitalists. J Gen Intern Med. 2013;28(3):370–376.
  32. Singh S, Lin YL, Kuo YF, Nattinger AB, Goodwin JS. Variation in the risk of readmission among hospitals: the relative contribution of patient, hospital and inpatient provider characteristics. J Gen Intern Med. 2014;29(4):572–578.
  33. Prvu Bettger J, Alexander KP, Dolor RJ, et al. Transitional care after hospitalization for acute stroke or myocardial infarction: a systematic review. Ann Intern Med. 2012;157(6):407–416.
  34. Jha AK, Orav EJ, Li Z, Epstein AM. The inverse relationship between mortality rates and performance in the Hospital Quality Alliance measures. Health Aff (Millwood). 2007;26(4):1104–1110.
  35. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30‐day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–988.
  36. Fuller RL, Atkinson G, Hughes JS. Indications of biased risk adjustment in the hospital readmission reduction program. J Ambul Care Manage. 2015;38(1):39–47.
Issue
Journal of Hospital Medicine - 11(7)
Page Number
481-488
Article Type
Display Headline
Relationship between hospital financial performance and publicly reported outcomes
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Oanh Kieu Nguyen, MD, 5323 Harry Hines Blvd., Dallas, Texas 75390‐9169; Telephone: 214‐648‐3135; Fax: 214‐648‐3232; E‐mail: [email protected]

Identifying an Idle Line for Its Removal

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Can the identification of an idle line facilitate its removal? A comparison between a proposed guideline and clinical practice

Infections acquired in the hospital are termed healthcare‐associated infections (HAIs) and include central line–associated blood stream infections (CLABSIs). Among HAIs, CLABSIs cause the highest number of preventable deaths.[1] Central venous catheters (CVCs) or central lines are commonly used in the hospital.[2] Each year their use is linked to 250,000 cases of CLABSIs in the United States.[3] Some CLABSIs may be prevented by the prompt removal of the line.[4] However, CVCs are often retained after their clinical indication has lapsed and are then referred to as idle lines.[5, 6] In this work, we propose a guideline to facilitate the safe removal of an idle line and test it theoretically by examining where actual clinical practice agrees and disagrees with the proposed guideline.

METHODS

Setting

This work was conducted at a large, urban, tertiary care, academic health center in the United States as a collaborative effort to improve quality at our institution.[7]

Design and Patients

The reports linked with the electronic medical records at our institution include a daily, ward‐by‐ward listing of patients who have access other than a peripheral line in place. This central line dashboard accesses the information on intravenous access charted by bedside nurses to create a list of patients on every ward who have any kind of central access. Temporary central venous lines (CVLs), peripherally inserted central catheters (PICCs), ports, and dialysis catheters are all included. The unit charge nurses and managers use this dashboard to facilitate compliance with line care bundles. We used this source to identify patients with either type of CVC (CVLs or PICCs) on 8 days in August 2014, September 2014, and October 2014. Patients were included if they had a CVC and were on a general medical or surgical ward bed on audit day. CVLs at all sites were included (femoral, subclavian, and internal jugular). Patients in an intensive care unit (ICU) or progressive care unit on the day of the audit were excluded. Patients whose catheters were for chemotherapy and those admitted for a transplant or receiving palliative or hospice care were also excluded.
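To make the inclusion and exclusion rules above concrete, the following is a minimal Python sketch of how an audit-day cohort might be filtered from a dashboard export. The DashboardEntry fields and the category labels are illustrative assumptions; the actual dashboard schema is not described at this level of detail.

```python
# A minimal sketch (not the study's actual code) of the audit-day cohort filter.
from dataclasses import dataclass

@dataclass
class DashboardEntry:
    patient_id: str
    line_type: str      # e.g., "CVL", "PICC", "port", "dialysis"
    unit_type: str      # e.g., "medical_ward", "surgical_ward", "ICU", "PCU"
    indication: str     # e.g., "antibiotics", "chemotherapy"
    care_status: str    # e.g., "acute", "transplant", "palliative", "hospice"

def eligible_for_audit(entry: DashboardEntry) -> bool:
    """Apply the inclusion/exclusion rules used on each audit day."""
    if entry.line_type not in {"CVL", "PICC"}:                    # only CVLs and PICCs
        return False
    if entry.unit_type not in {"medical_ward", "surgical_ward"}:  # exclude ICU/PCU beds
        return False
    if entry.indication == "chemotherapy":                        # exclude chemotherapy catheters
        return False
    if entry.care_status in {"transplant", "palliative", "hospice"}:
        return False
    return True

# Example: keep only auditable patients from one day's dashboard pull.
dashboard = [
    DashboardEntry("A1", "PICC", "medical_ward", "antibiotics", "acute"),
    DashboardEntry("B2", "CVL", "ICU", "pressors", "acute"),
]
cohort = [e for e in dashboard if eligible_for_audit(e)]
print([e.patient_id for e in cohort])  # -> ['A1']
```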

Data Collection

A protocol for data collection was written, and a training session was held to review definitions, data sources, and methods to ensure consistency. Two authors (M.M. and J.D.), assisted by an experienced clinical nurse specialist, collected data on the patients captured on audit days. Each chart was reviewed on the day of the audit and the 2 days preceding the audit day, and the patient was then followed until discharge from the hospital, transfer to a higher level of care, death, or transition to palliative or hospice care. Demographics, details about the line, and the criteria for justified use were extracted from the electronic medical record.

Definitions

Justified and Idle Days

To justify the presence of a CVC on any given day, we used criteria that fell under 3 categories: intravenous (IV) access needs, unstable vitals, or meeting sepsis/systemic inflammatory response syndrome (SIRS) criteria (Table 1). For vital signs, a single abnormal reading was counted as fulfilling criteria for that day. If no criterion for justified use was met, the line was considered idle for that day.

Criteria to Justify the Presence of a Central Line
  • NOTE: If none of these criteria were met, the line was considered idle for that day. Abbreviations: IV, intravenous; TPN, total parenteral nutrition; SIRS, systemic inflammatory response syndrome; WBC, white blood count.

IV access needs
Expected duration of IV antibiotics >6 days
Administration of TPN
Anticipated requirement of home IV medications
Requirement of IV medications with documented difficult access
Hemorrhage requiring blood transfusions
Requiring more than 3 infusions
Requiring more than 2 infusions and blood transfusions
Abnormal vitals
Diastolic blood pressure >120 mm Hg
Systolic blood pressure <90 mm Hg
Systolic blood pressure >200 mm Hg
Heart rate >120 beats per minute
Heart rate <50 beats per minute
Respiratory rate >30 breaths per minute
Respiratory rate <10 breaths per minute
Oxygen saturation <90% as measured by pulse oximetry
Meeting SIRS criteria (2 or more of the following present)
Temperature >38°C, temperature <36°C, heart rate >90 beats per minute, respiratory rate >20 breaths per minute, WBC >12,000/mm3, WBC <1,000/mm3, bandemia >10%

Qualifying IV access needs were defined similarly to those previously used,[5, 6] whereas those for SIRS followed the current consensus.[8] To determine the number of IV medications or infusions, the medication administration record was reviewed. If 3 or more infusions were found, their compatibility was checked using the same database that nurses use at our institution. Difficult IV access was inferred from the indication for line placement, coupled with the absence of documentation of a peripheral IV. Clinical progress notes were reviewed to extract information on the length of proposed IV antibiotic courses, and discharge instructions were reviewed to verify whether the line was removed prior to discharge. The cutoffs for diastolic blood pressure, respiratory rate, and oxygen saturation used to label patients as hemodynamically labile are the same as those used by previous authors, and the diastolic cutoff also corresponds to the definition of hypertensive urgency.[5, 9] However, we diverged from the values previously used for tachycardia, bradycardia, and systolic hypotension, using heart rates >120 and <50 beats per minute (compared to >130 and <40 beats per minute) and systolic pressures <90 mm Hg (compared to <80 mm Hg) to justify the line.[5] Early warning scores have been used to identify hospitalized ward patients who are at risk for clinical deterioration. Although each score utilizes different thresholds, the risk for clinical deterioration increases as the vital signs worsen.[10] Bearing this in mind, the thresholds we elected to use are more clinically conservative and also parallel the nursing call orders currently used at our institution.
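As an illustration of how the Table 1 criteria translate into a day-level "justified versus idle" determination, the following is a minimal Python sketch. The DayRecord abstraction, its field names, and its units are hypothetical stand-ins for the chart-review data and are not part of the study's data model.

```python
# A minimal sketch (not the study's actual code) of the Table 1 check for a single
# calendar day. A single abnormal reading counts toward that entire day.
from dataclasses import dataclass

@dataclass
class DayRecord:
    # IV access needs
    expected_iv_abx_days: int = 0           # expected duration of IV antibiotic course
    tpn: bool = False
    home_iv_meds_anticipated: bool = False
    iv_meds_with_difficult_access: bool = False
    hemorrhage_needing_transfusion: bool = False
    n_infusions: int = 0
    transfusion_given: bool = False
    # most extreme vital signs recorded that day
    sbp_min: float = 120.0
    sbp_max: float = 120.0
    dbp_max: float = 70.0
    hr_min: float = 80.0
    hr_max: float = 80.0
    rr_min: float = 16.0
    rr_max: float = 16.0
    spo2_min: float = 98.0
    temp_min_c: float = 37.0
    temp_max_c: float = 37.0
    wbc: float = 8.0                        # x 1,000/mm3
    bands_pct: float = 0.0

def meets_access_need(d: DayRecord) -> bool:
    return (d.expected_iv_abx_days > 6 or d.tpn or d.home_iv_meds_anticipated
            or d.iv_meds_with_difficult_access or d.hemorrhage_needing_transfusion
            or d.n_infusions > 3
            or (d.n_infusions > 2 and d.transfusion_given))

def has_abnormal_vitals(d: DayRecord) -> bool:
    return (d.dbp_max > 120 or d.sbp_min < 90 or d.sbp_max > 200
            or d.hr_max > 120 or d.hr_min < 50
            or d.rr_max > 30 or d.rr_min < 10 or d.spo2_min < 90)

def meets_sirs(d: DayRecord) -> bool:
    criteria = [d.temp_max_c > 38, d.temp_min_c < 36, d.hr_max > 90,
                d.rr_max > 20, d.wbc > 12, d.wbc < 1, d.bands_pct > 10]
    return sum(criteria) >= 2               # 2 or more SIRS criteria present

def line_justified(d: DayRecord) -> bool:
    """True if the central line is justified that day; otherwise the day is idle."""
    return meets_access_need(d) or has_abnormal_vitals(d) or meets_sirs(d)

print(line_justified(DayRecord()))          # -> False, i.e., an idle day
```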

Proposed Guideline

We propose the guideline that a CVC may be safely removed the day after the first idle day.
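A minimal sketch of how this rule could be applied to a patient's sequence of daily determinations follows. It assumes the hypothetical line_justified() check sketched above and uses 0-based day indices purely for illustration.

```python
# A minimal sketch of the proposed rule: recommend removal the day after the
# first idle (unjustified) day. Day indexing here is illustrative only.
from typing import List, Optional

def recommended_removal_day(daily_justified: List[bool]) -> Optional[int]:
    """Return the 0-based day index on which removal is recommended,
    i.e., the day after the first idle day, or None if no idle day occurs."""
    for day, justified in enumerate(daily_justified):
        if not justified:
            return day + 1
    return None

# Example: justified for days 0-2, idle on day 3 -> removal recommended on day 4.
print(recommended_removal_day([True, True, True, False, False]))  # -> 4
```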

RESULTS

A total of 126 lines were observed in 126 patients. Eighty‐three (65.9%) of the lines were PICCs; the remaining 43 (34.1%) were CVLs. The indications for line placement were distributed across the need for central access or total parenteral nutrition, antibiotics, hemodynamic instability, and poor access with multiple IV medications (Table 2).

Description of the Study Cohort
Description Value
  • NOTE: Abbreviations: CVL, central venous line; IV, intravenous; PICC, peripherally inserted central catheter; SD, standard deviation; TPN, total parenteral nutrition.

Age in years, mean (SD) 55.7 (18)
Gender, n (%)
Female 66 (52.4)
Male 60 (47.6)
Type of line, n (%)
PICC 83 (65.9)
CVL 43 (34.1)
Indication for line placement, n (%)
Meds requiring central access or TPN 36 (28.6)
Antibiotics 34 (27.0)
Hemodynamic instability 30 (23.8)
Poor access with multiple IV medications 18 (14.3)
Unknown 8 (6.3)
Line removed prior to discharge, n (%)
Yes 76 (60.3)
No 50 (39.7)

Out of the 126 patients, 50 (39.7%) were discharged from the hospital, died, were transferred to a higher level of care, or transitioned to palliative or hospice care with the line in place. In the remaining 76 patients, the audit captured 635 days, out of which a line was in place for 522 (82.2%) days. Of these 522 days, the line's presence was justified by our criteria for 351 (67.2%) days. The most common reason for a line to be justified on any given day was the need for antibiotics followed by the presence of SIRS criteria (Table 3). The remaining 171 (32.7%) days were idle.

Criteria Met for the 351 Justified Line Days
Criteria N %
  • NOTE: Abbreviations: IV, intravenous; SIRS, systemic inflammatory response syndrome; TPN, total parenteral nutrition; hr, heart rate; bp, blood pressure. *Totals exceed 100% because multiple indications may exist.

No. of factors justifying use
1 184 52.4
2 127 36.2
>2 40 11.4
Reason for justifying line*
Anticipated home or >6 days of antibiotic use 181 51.6
SIRS criteria 124 35.3
TPN 96 27.4
Hemodynamic instability based on hr and bp 78 22.2
Poor access with need for IV medications 57 16.2
Respiratory rate (<10 or >30/minute) 25 7.1
Active hemorrhage requiring transfusions 12 3.4
>3 infusions 6 1.7

A comparison of the actual removal of the 76 central lines in practice relative to the proposed guideline of removing it the day following the first idle day is displayed in Figure 1. The central line was removed prior to our proposed guideline in 11 (14.5%) patients, and waiting for an idle day in these patients would have added 46 line days. In almost half the patients (n = 36, 47.4%), the line was removed in agreement with the proposed guideline. None of the patients in whom the line was removed prior to or in accordance with our proposed guideline required a line reinsertion. Line removal was delayed in 29 (38.2%) patients when compared to our proposed guideline. In these patients, following the guideline would have created 122 line‐free days. Most (n = 102, 83.6%) of these potential line‐free days were idle. Twenty (16.4%) were justified, of which half (n = 10) were justified by meeting SIRS criteria.
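The comparison summarized above and in Figure 1 can be expressed as a simple classification of each removed line as early, concordant, or delayed relative to the guideline. The sketch below is illustrative only; the day indexing and per-patient record format are assumptions, not the study's analysis code.

```python
# A minimal sketch of the Figure 1 comparison: classify each removal relative to
# the guideline and tally the line-days implied by each discordance.
from typing import List, Optional, Dict

def classify_removal(actual_day: int, guideline_day: Optional[int]) -> str:
    if guideline_day is None or actual_day < guideline_day:
        return "early"        # removed before the guideline would allow
    if actual_day == guideline_day:
        return "concordant"   # removed the day after the first idle day
    return "delayed"          # retained beyond the guideline recommendation

def tally(patients: List[Dict]) -> Dict[str, int]:
    counts = {"early": 0, "concordant": 0, "delayed": 0}
    extra_line_days = 0       # days added if early removals waited for an idle day
    avoidable_line_days = 0   # days saved if delayed removals followed the guideline
    for p in patients:
        label = classify_removal(p["actual_day"], p["guideline_day"])
        counts[label] += 1
        if label == "early" and p["guideline_day"] is not None:
            extra_line_days += p["guideline_day"] - p["actual_day"]
        elif label == "delayed":
            avoidable_line_days += p["actual_day"] - p["guideline_day"]
    counts["extra_line_days"] = extra_line_days
    counts["avoidable_line_days"] = avoidable_line_days
    return counts

# Example with three hypothetical patients: one early, one concordant, one delayed.
print(tally([{"actual_day": 2, "guideline_day": 4},
             {"actual_day": 5, "guideline_day": 5},
             {"actual_day": 8, "guideline_day": 5}]))
```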

Figure 1
Pictorial comparison between line removal in practice and the proposed guideline of removing the line the day following the first idle day. Each bar represents 1 of the 76 patients in whom the line was removed prior to discharge. The diamond represents the actual removal of the line in practice. The bar is red to indicate that the line would remain in place according to our proposed guideline and turns green the day following the first idle day, indicating that our guideline would recommend line removal.

DISCUSSION

Approximately 1 in every 25 inpatients in the United States has at least 1 HAI on any given day.[11] The case fatality rate from a CLABSI may be as high as 12%, and up to 70% of these infections may be preventable.[1, 12] Interventions that have successfully decreased CLABSIs have focused on patients in ICUs.[13] However, CVCs are increasingly prevalent outside the ICU, with over 4.5 million line days in non‐ICU beds reported to the National Healthcare Safety Network in 2012 compared to 2.5 million in 2010.[2, 14] Moreover, adherence rates to infection control practices may be lower on the wards than in the ICUs.[6, 15] Consequently, although the number of CLABSIs has declined over the last decade, most now occur outside the ICU.[16] These trends underscore the need to develop strategies aimed at CLABSI prevention on the floors.

Analogous to the life cycle of a urinary catheter described by Meddings et al.,[17] strategies to prevent CLABSIs and other CVC‐related complications may be designed around the life cycle of a CVC. The life cycle starts with insertion and moves on to the maintenance, removal, and possible reinsertion of the line. The process thus starts with the decision to place the line. Over the last decade, this decision making has changed in part due to PICCs. This shift is reflected in PICC prevalence rates: in 2001, 11% of audited central lines were PICCs compared to 56% in 2007.[5, 6] In our audit, 66% of the CVCs were PICCs. This increase in the use of PICCs may be attributable to the ease and safety of their placement coupled with the increased availability of vascular access placement teams.[18] The risk of overuse that may result from such expediency may be countered by adhering to guidelines such as the Michigan Appropriateness Guide for Intravenous Catheters, which provides both clinically detailed guidance and an impetus for reflective decision making around intravenous access.[19]

The placement of CVCs for prolonged parenteral antibiotics may be a particular subset that bears further exploration. Similar to previous reports, we found that a large number of the CVCs were both inserted for and justified by the need for IV antibiotics.[5] Guidelines delineated by the Infectious Diseases Society of America regarding outpatient parenteral antibiotics weigh both the duration of therapy and the antimicrobial's potential for causing phlebitis when recommending the type of intravascular access.[20] Many courses may therefore be completed through peripheral or midline catheters. Developing strong partnerships between infectious disease specialists, hospitalists, and the facilities or home‐care services treating these patients may curtail the use of CVCs for antimicrobial administration.

The main focus of our work is on facilitating the safe removal of CVCs. The risk of CLABSIs increases each day a CVC is in place, and guidelines to prevent CLABSIs include recommendations to promptly remove nonessential catheters.[4, 21] There is also an emerging understanding that the risk of a PICC‐related CLABSI approaches that of a traditional central line in hospitalized patients, and PICCs confer an increased risk of venous thromboembolism.[18, 22] Although nearly half of surveyed hospitalists recently reported leaving PICCs in place until discharge day, our data suggest that this practice may be driven by the trajectory of a patient's recovery as much as by knowledge gaps related to the use of PICCs.[23] In nearly half the instances, clinical practice already mirrors our proposed guideline, with line removal coinciding with both the timing proposed by our guideline and discharge day. However, there is room for improvement, as line removal could have been expedited in the 29 patients in whom the line was retained after the first idle day. Maintaining an awareness of a CVC's presence and weighing its risks and benefits daily may facilitate its removal. Based on the recent finding that up to a quarter of clinicians are unaware that their patients have a central line, the mere reminder of the presence of a line using such criteria may expedite its removal by triggering a purposeful reassessment of its ongoing need.[24] Premature CVC removal requiring line reinsertion is an unintended consequence that may emerge from the earlier removal of lines. In our sample, none of the patients who had lines removed either prior to or in accordance with our proposed guideline required a line reinsertion. In addition to line reinsertion, delays in laboratory testing and reporting due to the unavailability of access, increased patient discomfort, and increased workload on the bedside nurse or vascular access team must also be considered when implementing strategies aimed at decreasing line days.

We envisage using these criteria both to empower practitioners with knowledge and to foster shared accountability among all team members through a uniform tool. This can occur through partnerships between infection control, clinical nurse specialists, bedside nursing, and physicians. The electronic medical record could be leveraged to scan the record for the criteria and create a notification when the line becomes idle, as sketched below. In alignment with the Michigan Appropriateness Guide for Intravenous Catheters guidelines, we do not support the removal of lines by nursing staff without physician notification.[19] Such principles have been successfully harnessed in strategies to prevent both catheter‐associated urinary tract infections and CLABSIs in ICUs.[13, 25] In light of the complexity surrounding the decision making for CVCs, our criteria were focused on the wards and erred on the side of clinical caution. This clinical conservatism is apparent in the patients in whom lines were removed prior to what our guideline would propose, yet none of these patients required a line reinsertion. As concerns about recrudescent clinical instability may drive decision making around line removal, such conservatism may be warranted initially. However, the fidelity of these criteria in the clinical setting will need prospective validation. In particular, the inclusion of SIRS criteria may have led to an overestimation of justified days. Further studies may be needed to refine the criteria and find a clinical hierarchy that balances the risks and benefits of retaining a central line.
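A minimal sketch of such an EMR-driven notification follows, under the assumption of generic fetch and notify hooks rather than any specific EMR interface; the function names and record shapes are placeholders.

```python
# A minimal sketch of a daily job that re-applies the justification criteria to
# each active central line and flags newly idle lines for the care team. No
# specific EMR product or API is implied; all hooks here are hypothetical.
from typing import Iterable

def daily_idle_line_sweep(active_lines: Iterable[dict],
                          line_justified,        # e.g., the check sketched earlier
                          notify_team) -> int:
    """Scan all active central lines once per day and notify on idle lines.
    Returns the number of notifications sent."""
    notified = 0
    for line in active_lines:
        todays_record = line["today"]            # per-day chart abstraction
        if not line_justified(todays_record):
            notify_team(line["patient_id"],
                        message="Central line met no justification criterion "
                                "today; please reassess the need for this line.")
            notified += 1
    return notified

# Example wiring with stand-in callables.
sent = daily_idle_line_sweep(
    [{"patient_id": "A1", "today": object()}],
    line_justified=lambda rec: False,            # pretend the line is idle today
    notify_team=lambda pid, message: print(f"[{pid}] {message}"),
)
print(sent)  # -> 1
```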

Our work has certain limitations. It is a single center's experience, and our findings may therefore not be generalizable. Except when the indication for the line was difficult access, we did not attempt to verify the presence of a peripheral IV. This, in combination with the inclusion of SIRS criteria, likely leads to an underestimation of idle days. In the interest of focusing on patients in whom the decision making around a line would be the least controversial, we did not continue to follow patients who were transferred to a higher level of care. It is possible, however, that these transfers were precipitated by line‐associated complications such as sepsis, and they would be important to track. We did not measure the agreement between data collectors, although definitions and methodologies were standardized and reviewed prior to data collection. As this was an observational assessment of a proposed guideline, we cannot predict how the recommendations generated by it would be received by clinicians. Although this may prove to be a barrier to adoption, we hope that the conversation it initiates leads to change.

Hospitalists are positioned to potentially influence the entire life cycle of a central line on the floor. Strategies can be enacted at each stage to help decrease the potential for harm from these devices to our patients. Creating and testing criteria and guidelines such as the one we propose represents just 1 such strategy in a multidisciplinary effort to provide the best possible care.

Acknowledgements

The authors thank Jennifer Dunscomb, Kristen Kelly, and their teams, and Deanna Sidwell, Todd Biggerstaff, Joan Miller, Rob Clark, and the tireless providers at Indiana University Health Methodist Hospital for their support.

Disclosures: This work was supported by the Indiana University Health Values Grant for research. The authors have no conflicts of interest to report.

References
1. Umscheid CA, Mitchell MD, Doshi JA, Agarwal R, Williams K, Brennan PJ. Estimating the proportion of healthcare‐associated infections that are reasonably preventable and the related mortality and costs. Infect Control Hosp Epidemiol. 2011;32(2):101-114.
2. Dudeck MA, Weiner LM, Allen‐Bridson K, et al. National Healthcare Safety Network (NHSN) report, data summary for 2012, device‐associated module. Am J Infect Control. 2013;41(12):1148-1166.
3. Maki DG, Kluger DM, Crnich CJ. The risk of bloodstream infection in adults with different intravascular devices: a systematic review of 200 published prospective studies. Mayo Clin Proc. 2006;81(9):1159-1171.
4. O'Grady NP, Alexander M, Burns LA, et al. Guidelines for the prevention of intravascular catheter‐related infections. Clin Infect Dis. 2011;52(9):e162-e193.
5. Chernetsky Tejedor S, Tong D, Stein J, et al. Temporary central venous catheter utilization patterns in a large tertiary care center: tracking the “idle central venous catheter.” Infect Control Hosp Epidemiol. 2012;33(1):50-57.
6. Trick WE, Vernon M, Welbel SF, Wisniewski MF, Jernigan JA, Weinstein RA. Unnecessary use of central venous catheters: the need to look outside the intensive care unit. Infect Control Hosp Epidemiol. 2004;25(3):266-268.
7. IU Health Methodist Hospital website. Available at: http://iuhealth.org/methodist/aboIut. Accessed October 20, 2014.
8. Bone RC, Balk RA, Cerra FB, et al. Definitions for Sepsis and Organ Failure and Guidelines for the Use of Innovative Therapies in Sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest. 2009;136(5 suppl):e28.
9. Pak KJ, Hu T, Fee C, Wang R, Smith M, Bazzano LA. Acute hypertension: a systematic review and appraisal of guidelines. Ochsner J. 2014;14(4):655-663.
10. Churpek MM, Yuen TC, Edelson DP. Predicting clinical deterioration in the hospital: the impact of outcome selection. Resuscitation. 2013;84(5):564-568.
11. Magill SS, Edwards JR, Bamberg W, et al. Multistate point‐prevalence survey of health care–associated infections. N Engl J Med. 2014;370(13):1198-1208.
12. Klevens RM, Edwards JR, Richards CL, et al. Estimating health care‐associated infections and deaths in U.S. hospitals, 2002. Public Health Rep. 2007;122(2):160-166.
13. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter‐related bloodstream infections in the ICU. N Engl J Med. 2006;355(26):2725-2732.
14. Dudeck MA, Horan TC, Peterson KD, et al. Data summary for 2011, device‐associated module. Centers for Disease Control and Prevention. National Healthcare Safety Network (NHSN) Report. Available at: http://www.cdc.gov/nhsn/PDFs/dataStat/NHSN‐Report‐2011‐Data‐Summary.pdf. Published April 1, 2013. Last accessed January 2015.
15. Burdeu G, Currey J, Pilcher D. Idle central venous catheter‐days pose infection risk for patients after discharge from intensive care. Am J Infect Control. 2014;42(4):453-455.
16. Liang SY, Marschall J. Update on emerging infections: news from the Centers for Disease Control and Prevention. Vital signs: central line‐associated blood stream infections—United States, 2001, 2008, and 2009. Ann Emerg Med. 2011;58(5):447-451.
17. Meddings J, Rogers MAM, Krein SL, Fakih MG, Olmsted RN, Saint S. Reducing unnecessary urinary catheter use and other strategies to prevent catheter‐associated urinary tract infection: an integrative review. BMJ Qual Saf. 2014;23(4):277-289.
18. Chopra V, O'Horo JC, Rogers MAM, Maki DG, Safdar N. The risk of bloodstream infection associated with peripherally inserted central catheters compared with central venous catheters in adults: a systematic review and meta‐analysis. Infect Control Hosp Epidemiol. 2013;34(9):908-918.
19. Chopra V, Flanders SA, Saint S, et al. The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC): results from a multispecialty panel using the RAND/UCLA Appropriateness Method. Ann Intern Med. 2015;163(6 suppl):S1-S40.
20. Tice AD, Rehm SJ, Dalovisio JR, et al. Practice guidelines for outpatient parenteral antimicrobial therapy. IDSA guidelines. Clin Infect Dis. 2004;38(12):1651-1672.
21. McLaws M‐L, Berry G. Nonuniform risk of bloodstream infection with increasing central venous catheter‐days. Infect Control Hosp Epidemiol. 2005;26(8):715-719.
22. Chopra V, Anand S, Hickner A, et al. Risk of venous thromboembolism associated with peripherally inserted central catheters: a systematic review and meta‐analysis. Lancet. 2013;382(9889):311-325.
23. Chopra V, Kuhn L, Flanders SA, Saint S, Krein SL. Hospitalist experiences, practice, opinions, and knowledge regarding peripherally inserted central catheters: results of a national survey. J Hosp Med. 2013;8(11):635-638.
24. Chopra V, Govindan S, Kuhn L, et al. Do clinicians know which of their patients have central venous catheters? Ann Intern Med. 2014;161(8):562.
25. Reilly L, Sullivan P, Ninni S, Fochesto D, Williams K, Fetherman B. Reducing foley catheter device days in an intensive care unit: using the evidence to change practice. AACN Adv Crit Care. 2006;17(3):272-283.
Issue
Journal of Hospital Medicine - 11(7)
Page Number
489-493

Infections acquired in the hospital are termed healthcare‐associated infections (HAIs) and include central lineassociated blood stream infections (CLABSIs). Among HAIs, CLABSIs cause the highest number of preventable deaths.[1] Central venous catheters (CVCs) or central lines are commonly used in the hospital.[2] Each year their use is linked to 250,000 cases of CLABSIs in the United States.[3] Some CLABSIs may be prevented by the prompt removal of the line.[4] However, CVCs are often retained after their clinical indication has lapsed and are then referred to as idle lines.[5, 6] In this work, we propose and theoretically test a guideline to facilitate the safe removal of an idle line by observing the agreement and disagreement between actual practice and the proposed guideline.

METHODS

Setting

This work was conducted at a large, urban, tertiary care, academic health center in the United States as a collaborative effort to improve quality at our institution.[7]

Design and Patients

The reports linked with the electronic medical records at our institution include a daily, ward‐by‐ward listing of patients who have access other than a peripheral line in place. This central line dashboard accesses the information on intravenous access charted by bedside nurses to create a list of patients on every ward who have any kind of central access. Temporary central venous lines (CVLs), peripherally inserted central catheters (PICCs), ports, and dialysis catheters are all included. The unit charge nurses and managers use this dashboard to facilitate compliance with line care bundles. We used this source to identify patients with either type of CVC (CVLs or PICCs) on 8 days in August 2014, September 2014, and October 2014. Patients were included if they had a CVC and were on a general medical or surgical ward bed on audit day. CVLs at all sites were included (femoral, subclavian, and internal jugular). Patients in an intensive care unit (ICU) or progressive care unit on the day of the audit were excluded. Patients whose catheters were for chemotherapy and those admitted for a transplant or receiving palliative or hospice care were also excluded.

Data Collection

A protocol for data collection was written out, and a training session was held to review definitions, data sources, and methods to ensure consistency. Two authors (M.M. and J.D.) assisted by an experienced clinical nurse specialist collected data on the patients captured on audit days. Each chart was reviewed on the day of the audit, the 2 days preceding the audit day, and then followed until the patient was either discharged from the hospital or transferred to a higher level of care, died, or transitioned to palliative or hospice care. Demographics, details about the line, and the criteria for justified use were extracted from the electronic medical record.

Definitions

Justified and Idle Days

To justify the presence of a CVC on any given day, we used criteria that fell under 3 categories: intravenous (IV) access needs, unstable vitals, or meeting sepsis/systemic inflammatory response syndrome (SIRS) criteria (Table 1). For vital signs, a single abnormal reading was counted as fulfilling criteria for that day. If no criterion for justified use was met, the line was considered idle for that day.

Criteria to Justify the Presence of a Central Line
  • NOTE: If none of these criteria were met, the line was considered idle for that day. Abbreviations: IV, intravenous; TPN, total parenteral nutrition; SIRS, systemic inflammatory response syndrome; WBC, white blood count.

IV access needs
Expected duration of IV antibiotics >6 days
Administration of TPN
Anticipated requirement of home IV medications
Requirement of IV medications with documented difficult access
Hemorrhage requiring blood transfusions
Requiring more than 3 infusions
Requiring more than 2 infusions and blood transfusions
Abnormal vitals
Diastolic blood pressure >120 mm Hg
Systolic blood pressure <90 mm Hg
Systolic blood pressure >200 mm Hg
Heart rate >120 beats per minute
Heart rate <50 beats per minute
Respiratory rate >30 breaths per minute
Respiratory rate <10 breaths per minute
Oxygen saturation <90% as measured by pulse oximetry
Meeting SIRS criteria (2 or more of the following present)
Temp >38C, Temp <36C, heart rate >90 beats per minute, respiratory rate >20 breaths per minute, WBC >12,000/mm3, WBC <1,000/mm3, bandemia >10%

Qualifying IV access needs were defined similarly to those previously used,[5, 6] whereas those for SIRS followed the current consensus.[8] To determine the number of IV medications or infusions, the medication administration record was reviewed. If 3 or more infusions were found, their compatibility was checked using the same database that nurses use at our institution. Difficult IV access was inferred from the indication for line placement, coupled with the absence of documentation of a peripheral IV. Clinical progress notes were reviewed to extract information on the length of proposed IV antibiotic courses, and discharge instructions were reviewed to verify whether the line was removed prior to discharge or not. The cutoffs for diastolic blood pressure, respiratory rate, and oxygen saturation used to label patients hemodynamically labile are the same as those used by previous authors and also constitute the definition of hypertensive urgency.[5, 9] However, we diverged from the values previously used for tachycardia, bradycardia, and systolic hypotension using heart rates >120 and <50 beats per minute (compared to >130 and <40 beats per minute) and systolics <90 mm Hg (compared to <80 mm Hg) to justify the line.[5] Early warning scores have been used to identify hospitalized ward patients who are at risk for clinical deterioration. Although each score utilizes different thresholds, the risk for clinical deterioration increases as the vitals worsen.[10] Bearing this in mind, the thresholds we elected to use are more clinically conservative and also parallel the nursing call orders currently used at our institution.

Proposed Guideline

We propose the guideline that a CVC may be safely removed the day after the first idle day.

RESULTS

A total of 126 lines were observed in 126 patients. Eighty‐three (65.9%) of the lines were PICCs. The remaining 43 (34.1%) were CVLs. The indications for line placement were distributed between the need for central access, total parenteral nutrition, or antibiotics (Table 2).

Description of the Study Cohort
Description Value
  • NOTE: Abbreviations: CVL, central venous line; IV, intravenous; PICC, peripherally inserted central catheter; SD, standard deviation; TPN, total parenteral nutrition.

Age in yrs mean (SD) 55.7 (18)
Gender, n (%)
Female 66 (52.4)
Male 60 (47.6)
Type of line, n (%)
PICC 83 (65.9)
CVL 43 (34.1)
Indication for line placement, n (%)
Meds requiring central access or TPN 36 (28.6)
Antibiotics 34 (27.0)
Hemodynamic instability 30 (23.8)
Poor access with multiple IV medications 18 (14.3)
Unknown 8 (6.3)
Line removed prior to discharge, n (%)
Yes 76 (60.3)
No 50 (39.7)

Out of the 126 patients, 50 (39.7%) were discharged from the hospital, died, were transferred to a higher level of care, or transitioned to palliative or hospice care with the line in place. In the remaining 76 patients, the audit captured 635 days, out of which a line was in place for 522 (82.2%) days. Of these 522 days, the line's presence was justified by our criteria for 351 (67.2%) days. The most common reason for a line to be justified on any given day was the need for antibiotics followed by the presence of SIRS criteria (Table 3). The remaining 171 (32.7%) days were idle.

Criteria Met for the 351 Justified Line Days
Criteria N %
  • NOTE: Abbreviations: IV, intravenous; SIRS, systemic inflammatory response syndrome; TPN, total parenteral nutrition; hr: heart rate; bp. blood pressure. *Totals exceed 100% because multiple indications may exist.

No. of factors justifying use
1 184 52.4%
2 127 36.2%
>2 40 11.4
Reason for justifying line*
Anticipate home or >6 days of antibiotic use 181 51.6
SIRS criteria 124 35.3
TPN 96 27.4
Hemodynamic instability based on hr and bp 78 22.2
Poor access with need for IV medications 57 16.2
Respiratory rate (<10 or >30/minute) 25 7.1
Active hemorrhage requiring transfusions 12 3.4
>3 infusions 6 1.7

A comparison of the actual removal of the 76 central lines in practice relative to the proposed guideline of removing it the day following the first idle day is displayed in Figure 1. The central line was removed prior to our proposed guideline in 11 (14.5%) patients, and waiting for an idle day in these patients would have added 46 line days. In almost half the patients (n = 36, 47.4%), the line was removed in agreement with the proposed guideline. None of the patients in whom the line was removed prior to or in accordance with our proposed guideline required a line reinsertion. Line removal was delayed in 29 (38.2%) patients when compared to our proposed guideline. In these patients, following the guideline would have created 122 line‐free days. Most (n = 102, 83.6%) of these potential line‐free days were idle. Twenty (16.4%) were justified, of which half (n = 10) were justified by meeting SIRS criteria.

Figure 1
Pictorial demonstration of the comparison between line removal in practice and the proposed guideline of removing it the day following the first idle day. Each bar represents 1 of the 76 patients in whom the line was removed prior to discharge. The diamond represents the actual removal of the line in practice. The bar is red to indicate that the line will remain in place according to our proposed guideline. It turns to green the day following the first idle day indicating that our guideline would recommend line removal.

DISCUSSION

Approximately 1 in every 25 inpatients in the United States has at least 1 HAI on any given day.[11] The case fatality rate from a CLABSI may be as high as 12%, and up to 70% of these infections may be preventable.[1, 12] Interventions successful in decreasing CLABSIs have focused on patients in ICUs.[13] However, CVCs are increasingly prevalent outside the ICU, with over 4.5 million line days in non‐ICU beds reported to the National Healthcare Safety Network in 2012 compared to 2.5 million in 2010.[2, 14] However, adherence rates to infection control practices may be lower on the wards than in the ICUs.[6, 15] Consequently, although the number of CLABSIs has declined over the last decade, most are now occurring outside the ICU.[16] These trends underscore the need to develop strategies aimed at CLABSI prevention on the floors.

Analogous to the life cycle of a urinary catheter described by Meddings et al.,[17] strategies to prevent CLABSIs and other CVC‐related complications may be designed around the life cycle of a CVC. The life cycle starts with insertion and moves on to the maintenance, removal, and possible reinsertion of the line. The process thus starts with the decision to place the line. Over the last decade, this decision making has changed in part due to PICCs. This shift is reflected in PICC prevalence rates: in 2001, 11% of audited central lines were PICCs compared to 56% in 2007.[5, 6] In our audit, 66% of the CVCs were PICCs. This increase in the use of PICCs may be attributable to the ease and safety of their placement coupled with the increased availability of vascular access placement teams.[18] The risk of overuse that may result from such expediency may be countered by adhering to guidelines such as the Michigan Appropriateness Guide for Intravenous Catheters, which provides both clinically detailed guidance and an impetus for reflective decision making around intravenous access.[19]

The placement of CVCs for prolonged parenteral antibiotics may be a particular subset that bears further exploration. Similar to previous reports, we found that a large number of the CVCs were both inserted for and justified by the need for IV antibiotics.[5] Guidelines delineated by the Infectious Diseases Society of America regarding outpatient parenteral antibiotics weigh both the duration of therapy and the antimicrobial's potential for causing phlebitis when recommending the type of intravascular access.[20] Many courses may therefore be completed through peripheral or midline catheters. Developing strong partnerships between infectious disease specialists, hospitalists, and the facilities or home‐care services treating these patients may curtail the use of CVCs for antimicrobial administration.

The main focus of our work is on facilitating the safe removal of CVCs. The risk of CLABSIs increases each day a CVC is in place, and guidelines to prevent CLABSIs include recommendations to promptly remove nonessential catheters.[4, 21] There is also an emerging understanding that the risk of a PICC‐related CLABSI approaches that from a traditional central line in hospitalized patients, and PICCs confer an increased risk of venous thromboembolism.[18, 22] Although nearly half of surveyed hospitalists recently reported leaving PICCs in place until discharge day, our data suggest that this practice may be driven by the trajectory of a patient's recovery as much as by knowledge gaps related to the use of PICCs.[23] In nearly half the instances, clinical practice already mirrors our proposed guideline, with line removal coinciding with both the timing proposed by our guideline and discharge day. However, there is room for improvement, as line removal may have been expedited in the 29 patients in whom the line was retained after the first idle day. Maintaining an awareness of its presence and weighing its risks and benefits daily may facilitate the removal of a CVC. Based on the recent findings that up to a quarter of clinicians are unaware that their patients have a central line, the mere reminder of the presence of a line using such criteria may expedite its removal by triggering a purposeful reassessment of its ongoing need.[24] Premature CVC removal requiring line reinsertion is an unintended consequence that may emerge from the earlier removal of lines. In our sample, none of the patients who had lines removed either prior to or in accordance with our proposed guideline required a line reinsertion. In addition to line reinsertion, delays in laboratory testing and reporting due to the unavailability of access, increased patient discomfort, or increased workload on the bedside nurse or vascular access team must also be considered when implementing strategies aimed at decreasing line days.

We envisage using these criteria to both empower practitioners with knowledge and foster shared accountability between all team members by using a uniform tool. This can occur through partnerships between infection control, clinical nurse specialists, bedside nursing, and physicians. The electronic medical record could be leveraged to scan the record for the criteria and create a notification when the line becomes idle. In alignment with the Michigan Appropriateness Guide for Intravenous Catheters guidelines, we do not support the removal of lines by nursing staff without physician notification.[19] Such principles have been successfully harnessed in strategies to prevent both catheter‐associated urinary tract infections and CLABSIs in ICUs.[13, 25] In light of the complexity surrounding the decision making for CVCs, our criteria were focused on the wards and erred on the side of clinical caution. This clinical conservatism is apparent in the patients in whom lines were removed prior to what our guideline would propose, yet none of the patients required a line reinsertion. As concerns about recrudescent clinical instability may drive decision making around line removal, such conservatism may be warranted initially. However, the fidelity of these criteria in the clinical setting will need prospective validation. In particular, the inclusion of SIRS criteria may have led to an overestimation of justified days. Further studies may be needed to refine the criteria and find a clinical hierarchy that balances the risks and benefits of retaining a central line.

Our work has certain limitations. It reflects a single center's experience, and our findings may therefore not be generalizable. Except when the indication for the line was difficult access, we did not attempt to verify the presence of a peripheral IV. This, in combination with the inclusion of SIRS criteria, likely leads to an underestimation of idle days. In the interest of focusing on patients in whom the decision making around a line would be the least controversial, we did not continue to follow patients who were transferred to a higher level of care. It is possible, however, that these transfers were precipitated by line-associated complications such as sepsis, and it would be important to track them. We did not measure agreement between data collectors, although definitions and methodologies were standardized and reviewed prior to data collection. As this was an observational assessment of a proposed guideline, we cannot predict how the recommendations it generates will be received by clinicians. Although this may prove to be a barrier to adoption, we hope that the conversation it initiates leads to change.

Hospitalists are well positioned to influence the entire life cycle of a central line on the floor. Strategies can be enacted at each stage to help decrease the potential for harm to our patients from these devices. Creating and testing criteria and guidelines such as those we propose represents just one such strategy in a multidisciplinary effort to provide the best possible care.

Acknowledgements

The authors thank Jennifer Dunscomb, Kristen Kelly, and their teams, and Deanna Sidwell, Todd Biggerstaff, Joan Miller, Rob Clark, and the tireless providers at Indiana University Health Methodist Hospital for their support.

Disclosures: This work was supported by the Indiana University Health Values Grant for research. The authors have no conflicts of interests to report.

References
  1. Umscheid CA, Mitchell MD, Doshi JA, Agarwal R, Williams K, Brennan PJ. Estimating the proportion of healthcare-associated infections that are reasonably preventable and the related mortality and costs. Infect Control Hosp Epidemiol. 2011;32(2):101-114.
  2. Dudeck MA, Weiner LM, Allen-Bridson K, et al. National Healthcare Safety Network (NHSN) report, data summary for 2012, device-associated module. Am J Infect Control. 2013;41(12):1148-1166.
  3. Maki DG, Kluger DM, Crinch CJ. The risk of bloodstream infection in adults with different intravascular devices: a systematic review of 200 published prospective studies. Mayo Clin Proc. 2006;81(9):1159-1171.
  4. O'Grady NP, Alexander M, Burns LA, et al. Guidelines for the prevention of intravascular catheter-related infections. Clin Infect Dis. 2011;52(9):e162-e193.
  5. Chernetsky Tejedor S, Tong D, Stein J, et al. Temporary central venous catheter utilization patterns in a large tertiary care center: tracking the "idle central venous catheter." Infect Control Hosp Epidemiol. 2012;33(1):50-57.
  6. Trick WE, Vernon M, Welbel SF, Wisniewski MF, Jernigan JA, Weinstein RA. Unnecessary use of central venous catheters: the need to look outside the intensive care unit. Infect Control Hosp Epidemiol. 2004;25(3):266-268.
  7. IU Health Methodist Hospital website. Available at: http://iuhealth.org/methodist/aboIut. Accessed October 20, 2014.
  8. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest. 2009;136(5 suppl):e28.
  9. Pak KJ, Hu T, Fee C, Wang R, Smith M, Bazzano LA. Acute hypertension: a systematic review and appraisal of guidelines. Ochsner J. 2014;14(4):655-663.
  10. Churpek MM, Yuen TC, Edelson DP. Predicting clinical deterioration in the hospital: the impact of outcome selection. Resuscitation. 2013;84(5):564-568.
  11. Magill SS, Edwards JR, Bamberg W, et al. Multistate point-prevalence survey of health care-associated infections. N Engl J Med. 2014;370(13):1198-1208.
  12. Klevens RM, Edwards JR, Richards CL, et al. Estimating health care-associated infections and deaths in U.S. hospitals, 2002. Public Health Rep. 2007;122(2):160-166.
  13. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355(26):2725-2732.
  14. Dudeck MA, Horan TC, Peterson KD, et al. Data summary for 2011, device-associated module. Centers for Disease Control and Prevention. National Healthcare Safety Network (NHSN) Report. Available at: http://www.cdc.gov/nhsn/PDFs/dataStat/NHSN‐Report‐2011‐Data‐Summary.pdf. Published April 1, 2013. Accessed January 2015.
  15. Burdeu G, Currey J, Pilcher D. Idle central venous catheter-days pose infection risk for patients after discharge from intensive care. Am J Infect Control. 2014;42(4):453-455.
  16. Liang SY, Marschall J. Update on emerging infections: news from the Centers for Disease Control and Prevention. Vital signs: central line-associated blood stream infections—United States, 2001, 2008, and 2009. Ann Emerg Med. 2011;58(5):447-451.
  17. Meddings J, Rogers MAM, Krein SL, Fakih MG, Olmsted RN, Saint S. Reducing unnecessary urinary catheter use and other strategies to prevent catheter-associated urinary tract infection: an integrative review. BMJ Qual Saf. 2014;23(4):277-289.
  18. Chopra V, O'Horo JC, Rogers MAM, Maki DG, Safdar N. The risk of bloodstream infection associated with peripherally inserted central catheters compared with central venous catheters in adults: a systematic review and meta-analysis. Infect Control Hosp Epidemiol. 2013;34(9):908-918.
  19. Chopra V, Flanders SA, Saint S, et al. The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC): results from a multispecialty panel using the RAND/UCLA Appropriateness Method. Ann Intern Med. 2015;163(6 suppl):S1-S40.
  20. Tice AD, Rehm SJ, Dalovisio JR, et al. Practice guidelines for outpatient parenteral antimicrobial therapy. IDSA guidelines. Clin Infect Dis. 2004;38(12):1651-1672.
  21. McLaws M-L, Berry G. Nonuniform risk of bloodstream infection with increasing central venous catheter-days. Infect Control Hosp Epidemiol. 2005;26(8):715-719.
  22. Chopra V, Anand S, Hickner A, et al. Risk of venous thromboembolism associated with peripherally inserted central catheters: a systematic review and meta-analysis. Lancet. 2013;382(9889):311-325.
  23. Chopra V, Kuhn L, Flanders SA, Saint S, Krein SL. Hospitalist experiences, practice, opinions, and knowledge regarding peripherally inserted central catheters: results of a national survey. J Hosp Med. 2013;8(11):635-638.
  24. Chopra V, Govindan S, Kuhn L, et al. Do clinicians know which of their patients have central venous catheters? Ann Intern Med. 2014;161(8):562.
  25. Reilly L, Sullivan P, Ninni S, Fochesto D, Williams K, Fetherman B. Reducing foley catheter device days in an intensive care unit: using the evidence to change practice. AACN Adv Crit Care. 2006;17(3):272-283.
Issue
Journal of Hospital Medicine - 11(7)
Page Number
489-493
Display Headline
Can the identification of an idle line facilitate its removal? A comparison between a proposed guideline and clinical practice
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Areeba Kara, MD, Inpatient Medicine, Indiana University Health Physicians, Indiana University School of Medicine, Noyes Pavilion Suite 640, 1701 N Senate Avenue, Indianapolis, IN 46202‐1239; Telephone: 317‐962‐2894; Fax number 317‐963‐5285; E‐mail: [email protected]

Discordance Between Patient and Provider

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
How often are hospitalized patients and providers on the same page with regard to the patient's primary recovery goal for hospitalization?

Patient‐centered care has been recognized by the Institute of Medicine as an essential aim of the US healthcare system.[1] A fundamental component of patient‐centered care is to engage patients and caregivers in establishing preferences, needs, values, and overall goals regarding their care.[1] Prior studies have shown that delivering high‐quality patient‐centered care is associated with improved health outcomes, and in some cases, reduced costs.[2, 3, 4, 5, 6, 7] Payors, including the Centers for Medicare and Medicaid Services under the Hospital Value‐Based Purchasing program, are increasingly tying payments to measures of patient experience.[8, 9] As more emphasis is placed on public reporting of these patient‐reported outcomes, healthcare organizations are investing in efforts to engage patients and caregivers, including efforts at establishing patients' preferences for care.[10]

In the acute care setting, a prerequisite for high-quality patient-centered care is identifying a patient's primary goal for recovery and then delivering care consistent with that goal.[11, 12, 13] Haberle et al. previously validated patients' most common goals for recovery in the hospital setting, grouped into 7 broad categories: (1) be cured, (2) live longer, (3) improve or maintain health, (4) be comfortable, (5) accomplish a particular life goal, (6) provide support for a family member, or (7) other.[13] When providers' understanding of these recovery goals is not concordant with the patient's stated goals, patients may receive care inconsistent with their preferences; it is not uncommon for patients to receive aggressive curative treatments (eg, cardiopulmonary resuscitation) when they have expressed otherwise.[14] On the other hand, when patient goals and priorities are clearly established, patients may have better outcomes.[15] For example, earlier conversations about patient goals and priorities in serious illness can lead to realistic expectations of treatment, enhanced goal-concordant care, improved quality of life, higher patient satisfaction, more and earlier hospice care, fewer hospitalizations, better patient and family coping, reduced burden of decision making for families, and improved bereavement outcomes.[16, 17, 18]

Although previous studies have suggested poor patient‐physician concordance with regard to the patient's plan of care,[19, 20, 21, 22, 23, 24] there are limited data regarding providers' understanding of the patient's primary recovery goal during hospitalization. The purpose of this study was to identify the patients' Haberle goal, and then determine the degree of concordance among patients and key hospital providers regarding this goal.

METHODS

Study Setting

The Partners Human Research Committee approved the study. The study was conducted on an oncology unit and a medical intensive care unit (MICU) at a major academic medical center in Boston, Massachusetts. The oncology unit comprised 2 non-localized medical teams caring for patients admitted to that unit; the MICU comprised a single localized medical team. Medical teams working on these units consisted of a first responder (eg, an intern or a physician assistant [PA]), medical residents, and an attending physician. Both units had dedicated nursing staff.

Study Participants

All adult patients (>17 years of age) admitted to the oncology unit or MICU during the study period (November 2013 through May 2014) were eligible. These units were chosen because their patients are typically complex, with multiple medical comorbidities, longer lengths of stay, and many procedures and tests. In addition, a standard method for asking patients to identify a primary recovery goal for hospitalization aligned well with ongoing institutional efforts to engage these patients in goals of care discussions.

Research assistants identified all patients admitted to each study unit for at least 48 hours and approached them in a random order with a daily target of 2 to 3 patients. Only patients who demonstrated capacity (determined by medical team), or had a legally designated healthcare proxy (who spoke English and was available to participate on their behalf) were included. Research assistants then approached the patient's nurse and a physician provider (defined for this study as housestaff physician, PA, or attending) from the primary medical team to participate in the interview (within 24 hours of patient's interview). We excluded eligible patients who did not have capacity or an available caregiver or declined to participate.

Data Collection Instrument and Interviews

Research assistants administered a validated questionnaire developed by Haberle et al. to participants at least 48 hours into the patient's admission, to allow time to establish mutual understanding of the diagnosis and prognosis.[13] We asked patients (or the designated healthcare proxy) to select their single, most important Haberle goal (see above). Specifically, as in the original validation study,[13] patients or proxies were asked the following question: "Please tell me your most important goal of care for this hospitalization." If they did not understand this question, we asked a follow-up question: "What are you expecting will be accomplished during this hospitalization?" Within 24 hours of the patient/proxy interview, we independently asked the patient's nurse and physician to select what they thought was the patient's most important goal for recovery using the same questionnaire, adapted for providers. In each case, all participants were blinded to the responses of others.

Measures

We measured the frequency that each participant (patient/proxy, nurse, and physician) selected a specific Haberle recovery goal across all patients. We measured the rate of pairwise concordance by recovery goal for each participant dyad (patient/proxy‐nurse, patient/proxy‐physician, and nurse‐physician). Finally, we calculated the frequency of cases for which all 3 participants selected the same recovery goal.
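A minimal sketch of how these measures can be computed from the raw selections is shown below; the record structure and rater labels are hypothetical and are not drawn from our data-collection instrument.

```python
from collections import Counter
from typing import Dict, List

# Hypothetical input: one record per enrolled patient, holding the Haberle goal
# selected by each rater (patient or proxy, nurse, and physician provider).
Records = List[Dict[str, str]]


def goal_frequencies(records: Records, rater: str) -> Counter:
    """Count how often each Haberle goal was selected by one type of rater."""
    return Counter(r[rater] for r in records)


def pairwise_concordance(records: Records, rater_a: str, rater_b: str) -> float:
    """Fraction of patients for whom two raters selected the identical goal."""
    return sum(r[rater_a] == r[rater_b] for r in records) / len(records)


def three_way_agreement(records: Records) -> float:
    """Fraction of patients for whom all three raters selected the identical goal."""
    return sum(r["patient"] == r["nurse"] == r["physician"] for r in records) / len(records)
```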

Statistical Analyses

Descriptive statistics were used to report patient demographic data. The frequencies of selected responses were calculated and reported as percentages for each type of participant. The differences in rate of responses for each Haberle goal were compared across each participant group using chi-square (χ2) analysis. We then performed 2-way kappa statistical tests to measure inter-rater agreement for each dyad.
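As an illustration of these analyses, the sketch below applies a chi-square test to the 3 x 7 table of goal counts reported in Table 2 and computes an unweighted Cohen's kappa for one dyad. It assumes an omnibus test on the full contingency table and uses the freely available scipy and scikit-learn implementations; the toy kappa vectors stand in for the actual per-patient data.

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Counts of selected Haberle goals by respondent group (rows), with columns in the
# order of Table 2: be cured, be comfortable, improve or maintain health, live longer,
# accomplish personal goal, provide support for family, other.
counts = np.array([
    [51,  9, 32, 14, 2, 1, 0],   # patient/caregiver
    [20, 18, 42, 21, 0, 1, 7],   # physician provider
    [20, 18, 51, 12, 3, 1, 4],   # nurse
])
chi2, p_value, dof, _ = chi2_contingency(counts)

# Unweighted Cohen's kappa for one dyad, given the goal each rater selected for the
# same patients (toy vectors shown here for illustration only).
patient_choices = ["be cured", "be cured", "improve or maintain health"]
physician_choices = ["improve or maintain health", "be cured", "improve or maintain health"]
kappa = cohen_kappa_score(patient_choices, physician_choices)
```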

RESULTS

Of 1436 patients (882 oncology, 554 MICU) hospitalized during the study period, 341 (156 oncology, 185 MICU) were admitted for <48 hours. Of 914 potentially eligible patients (617 oncology, 297 MICU), 191 (112 oncology and 79 MICU) were approached to participate based on our sampling strategy; of these, 8 (2 oncology and 6 MICU) did not have capacity (and no proxy was available) and 2 (1 oncology and 1 MICU) declined. Of the remaining 181 patients (109 oncology and 72 MICU), we obtained a completed questionnaire from all 3 interviewees on 109 (60.2% response rate).

Of the 109 study patients, 52 (47.7%) and 57 (52.3%) were admitted to the oncology and medical intensive care units, respectively (Table 1). Patients were predominantly middle aged, Caucasian, English‐speaking, and college‐educated. Healthcare proxies were frequently interviewed on behalf of patients in the MICU. Housestaff physicians were more often interviewed in the MICU, and PAs were interviewed only on oncology units. Compared to patient responders, nonresponders tended to be male and were admitted to oncology units (see Supporting Table 1 in the online version of this article).

Patient Characteristics
Characteristics | All Patients | Admitted to Medical Intensive Care Units | Admitted to Oncology Units
  • NOTE: Abbreviations: SD, standard deviation. *Patients were interviewed as part of a unique patient admission.

Total, no. (%) | 109 (100%) | 57 (52.3%) | 52 (47.7%)
Gender, no. (%)
  Male | 55 (50.5%) | 28 (49.1%) | 26 (50.0%)
  Female | 54 (49.5%) | 29 (50.9%) | 26 (50.0%)
Age, y, mean ± SD | 59.4 ± 14 | 59.7 ± 15 | 59.1 ± 13
  Median | 61 | 61 | 60
  Range | 21-88 | 21-88 | 22-85
Race, no. (%)
  White | 103 (94.5%) | 53 (93.0%) | 50 (96.2%)
  Other | 6 (5.5%) | 4 (7.0%) | 2 (3.8%)
Language, no. (%)
  English | 106 (97.2%) | 56 (98.1%) | 50 (96.2%)
  Other | 3 (2.8%) | 1 (1.9%) | 2 (3.8%)
Education level, no. (%)
  Less than high school | 30 (27.5%) | 17 (29.8%) | 13 (25.0%)
  High school diploma | 27 (24.5%) | 18 (31.6%) | 9 (17.3%)
  Some college or beyond | 52 (47.7%) | 22 (38.6%) | 30 (57.7%)
Patient or caregiver interviewed, no. (%)
  Patient | 68 (62.4%) | 27 (47.4%) | 48 (92.3%)
  Caregiver | 41 (37.6%) | 30 (52.6%) | 4 (7.7%)
Nurse interviewed, no. (unique) | 109 (75) | 57 (42) | 52 (33)
Physician provider interviewed, no. (%); no. unique
  Attending | 27 (24.8%); 20 | 15 (26.3%); 10 | 12 (23.1%); 10
  Housestaff | 48 (44.0%); 39 | 42 (73.7%); 33 | 6 (11.5%); 6
  Physician assistant | 34 (31.2%); 25 | 0 (0%); 0 | 34 (65.4%); 25

The frequencies of selected Haberle recovery goals by participant type across all patients are listed in Table 2. Patients (or proxies) most often selected "be cured" (46.8%). Assigned physicians and nurses more commonly selected "improve or maintain health" (38.5% and 46.8%, respectively). "Be comfortable" was selected by physicians and nurses more frequently than by patients (16.5%, 16.5%, and 8.3%, respectively). The rate of responses for each Haberle goal was significantly different across all respondent groups (P < 0.0001). The frequencies of selected Haberle goals were not significantly different between patients and proxies (P = 0.67), or for patients admitted to the MICU compared to oncology units (P = 0.64).

Primary Recovery Goal Reported by Patient, Physician Provider, and Nurse
Haberle Recovery Goal | Patient/Caregiver, no. (%), n = 109 | Physician Provider, no. (%), n = 109* | Nurse, no. (%), n = 109
  • NOTE: *Physician provider is defined as either a housestaff physician, physician assistant, or attending physician for the purposes of this study.

Be cured | 51 (46.8%) | 20 (18.3%) | 20 (18.3%)
Be comfortable | 9 (8.3%) | 18 (16.5%) | 18 (16.5%)
Improve or maintain health | 32 (29.4%) | 42 (38.5%) | 51 (46.8%)
Live longer | 14 (12.8%) | 21 (19.3%) | 12 (11%)
Accomplish personal goal | 2 (1.8%) | 0 (0%) | 3 (2.8%)
Provide support for family | 1 (0.9%) | 1 (0.9%) | 1 (0.9%)
Other | 0 (0%) | 7 (6.4%) | 4 (3.7%)

Inter‐rater agreement was poor to slight for the 3 participant dyads (kappa 0.09 [0.03‐0.19], 0.19 [0.08‐0.30], and 0.20 [0.08‐0.32] for patient‐physician, patient‐nurse, and nurse‐physician, respectively). The 3 participants selected the identical recovery goal in 22 (20.2%) cases, and each selected a distinct recovery goal in 32 (29.4%) cases. Pairwise concordance between nurses and physicians was 39.4%. There were no significant differences in agreement between patients admitted to the MICU compared to oncology units (P = 0.09).

DISCUSSION

We observed poor to slight concordance among patients and key hospital providers with regard to identifying the patient's primary recovery goal during acute hospitalization. Patients (or proxies) most often chose "be cured," whereas hospital providers most often chose "improve or maintain health." Patients were more than twice as likely to select "be cured" and half as likely to choose "be comfortable" compared with nurses or physicians. Strikingly, the patient (or proxy), nurse, and physician identified the same recovery goal in just 20% of cases. These findings were similar for patients admitted to either the MICU or oncology units and when healthcare proxies participated on behalf of the patient (eg, when incapacitated in the MICU).

There are many reasons why hospital providers may not correctly identify patients' primary recovery goals. First, we do not routinely ask patients to identify recovery goals upon admission in a structured and standardized manner. In fact, clinicians often do not elicit patients' needs, concerns, and expectations regarding their care in general.[25] Second, even when recovery goals are elicited at admission, they may not be communicated effectively to all members of the care team. This could be due to geographically non-localized teams (although we did not observe a statistically significant difference between the regionalized MICU and nonregionalized oncology care units), frequent provider-to-provider handoffs, and siloed electronic communication (eg, email, alphanumeric pages) regarding goals of care that inevitably leaves out key providers.[26] Third, healthcare proxies who are involved in decision making on the patient's behalf may not always be available to meet with the care team in person; consequently, their input may not be considered in a timely manner or reliably communicated to all members of the care team.

We observed a large discrepancy in how often patients chose "be cured" compared with their hospital providers. This could be explained by clinicians' unwillingness to disclose bad news or divulge accurate prognostic information that causes patients to feel depressed or lose hope, particularly for those patients with the worst prognoses.[16, 27, 28] Patients may lack sophisticated knowledge of their conditions for a variety of reasons, including low health literacy, at times choosing to hope for the best even when it is not realistic. Additionally, there may be more subtle differences in what patients and hospital providers consider the primary recovery goal in the context of the main reason for hospitalization and the underlying medical illness. For example, a patient with metastatic lung cancer hospitalized with recurrent postobstructive pneumonia may choose "be cured" as his/her primary recovery goal (thinking of the pneumonia), whereas physicians may choose "improve/maintain health" or "be comfortable" (thinking of the cancer). We also cannot exclude the possibility that sometimes when patients state "be cured" and clinicians state "improve health" as the primary goal, they are really saying the same thing in different ways. However, these are 2 different constructs (cure may not be possible for many patients) that may deserve an explicit discussion for patients to have realistic expectations for their health following hospitalization.

In short, our results underscore the importance of having an open and honest dialog with patients and caregivers throughout hospitalization and, where appropriate, the need to provide education about the potential futility of excessive care. Simply following patients' goals without discussing their feasibility and the consequences of aggressive treatments may result in unnecessary morbidity and misuse of healthcare resources. Once goals are clearly established, communicated, and refined in hospitalized patients with serious illness, there is much reason to believe that ongoing conversation will favorably impact outcomes.[29]

We found few studies that rigorously quantified the rate of concordance of hospital recovery goals among patients and key hospital providers; however, studies that measured overall plan of care agreement have demonstrated suboptimal concordance.[20, 30, 31] Shin et al. found significant underestimation of cancer patients' needs and poor concordance between patients and oncologists in assessing perceived needs of supportive care.[20] It is also notable that nurses and physicians had low levels of concordance in our study. O'Leary and colleagues found that nurses and physicians did not reliably communicate and often did not agree on the plan of care for hospitalized patients.[30] Although geographic regionalization of care teams and multidisciplinary rounds can improve the likelihood that key members of the care team are on the same page with regard to the plan of care, there is still much room for improvement.[26, 32, 33, 34] For example, although nurses and physicians in our study independently selected individual recovery goals with similar frequencies (Table 2), we observed suboptimal concordance between nurses and physician providers (36.8%) for specific patients, including on our regionalized care unit (MICU). This may be due to the reasons described above.

There are several implications of these findings. As payors continue to shift payments toward value‐based metrics, largely determined by patient experience and adequate advance care planning,[9] our findings suggest that more effort should be focused on delivering care consistent with patients' primary recovery goals. As a first step, healthcare organizations can focus on efforts to systematically identify and communicate recovery goals to all members of the care team, ensuring that patients' preferences, needs, and values are captured. In addition, as innovation in patient engagement and care delivery using Web‐based and mobile technology continues to grow,[35] using these tools to capture key goals for hospitalization and recovery can play an essential role. For example, as electronic health record vendors and institutions start to implement patient portals in the acute care setting, they should consider how to configure these tools to capture key goals for hospitalization and recovery, and then communicate them to the care team; preliminary work in this area is promising.[10]
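As one hedged illustration of what such a capture-and-communicate workflow might look like, the sketch below defines a simple structured record for a patient-stated recovery goal and a stand-in notification step. The schema, field names, and messaging mechanism are hypothetical and are not part of any particular electronic health record, portal product, or the toolkit cited above.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

# The 7 validated Haberle goal categories used in this study.
HABERLE_GOALS = [
    "be cured", "live longer", "improve or maintain health", "be comfortable",
    "accomplish a particular life goal", "provide support for a family member", "other",
]


@dataclass
class RecoveryGoalEntry:
    """Hypothetical portal record capturing the patient's primary recovery goal."""
    patient_id: str
    goal: str            # expected to be one of HABERLE_GOALS
    recorded_by: str     # "patient" or "healthcare proxy"
    recorded_at: datetime
    free_text: str = ""  # optional elaboration in the patient's own words


def care_team_notifications(entry: RecoveryGoalEntry,
                            team_member_ids: List[str]) -> List[Dict[str, str]]:
    """Build one notification payload per care-team member; a real system would hand
    these to whatever messaging the EHR or portal actually provides."""
    return [
        {"to": member_id,
         "patient": entry.patient_id,
         "message": f"Primary recovery goal for this hospitalization: {entry.goal}"}
        for member_id in team_member_ids
    ]
```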

Our study has several limitations. First, the study was conducted on 2 services (MICU and oncology) at a single institution using a sampling strategy in which research assistants enrolled 2 to 3 patients per day. Although the sampling was random, the availability of patients and proxies to be interviewed may have led to selection bias. Second, the sample size was small. Third, the patients who participated were predominantly white, English-speaking, and well educated, possibly a consequence of our sampling strategy. However, this fact makes our findings more striking; although cultural and language barriers were generally not present in our study population, large discrepancies in goal concordance still existed. Fourth, in instances when patients were unable to participate themselves, we interviewed their healthcare proxy; therefore, it is possible that the proxies' responses did not reflect those of the patient. However, we note that concordance rates did not significantly differ between the 2 services despite the fact that the proportion of proxy interviews was much higher in the MICU. Similarly, we cannot exclude the possibility that patients altered their stated goals in the presence of proxies, but patients were given the option to be interviewed alone. Patients may also have misunderstood the timing of the goals (during this hospitalization as opposed to long term), although research assistants made every effort to clarify this during the interviews. Finally, our data-collection instrument was previously validated in hospitalized general medicine patients rather than oncology or MICU patients, and it has not previously been used to ask clinicians directly to identify patients' recovery goals. However, there is no reason to suspect that it could not be used for this purpose in critical care as well as noncritical care settings, as the survey was developed by a multidisciplinary team that included medical professionals and was validated by clinicians who successfully identified a single, very broad goal (eg, be cured) in each case.

CONCLUSION

We report poor to slight concordance among hospitalized patients and key hospital providers with regard to the main recovery goal. Future studies should assess whether patient satisfaction and experience are adversely affected by patient-provider discordance regarding key recovery goals. Additionally, institutions may consider future efforts to elicit and communicate patients' primary recovery goals more effectively to all members of the care team, and to address discrepancies as soon as they are discovered.

Disclosures

This work was supported by a grant from the Gordon and Betty Moore Foundation (GBMF) (grant GBMF3914). GBMF had no role in the design or conduct of the study; collection, analysis, or interpretation of data; or preparation or review of the manuscript. The authors report no conflicts of interest.

References
  1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy of Sciences; 2001.
  2. Bartlett EE, Grayson M, Barker R, Levine DM, Golden A, Libber S. The effects of physician communications skills on patient satisfaction, recall, and adherence. J Chronic Dis. 1984;37(9-10):755-764.
  3. Little P, Everitt H, Williamson I, et al. Observational study of effect of patient centredness and positive approach on outcomes of general practice consultations. BMJ. 2001;323(7318):908-911.
  4. Clancy CM. Reengineering hospital discharge: a protocol to improve patient safety, reduce costs, and boost patient satisfaction. Am J Med Qual. 2009;24(4):344-346.
  5. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41-48.
  6. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients' perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921-1931.
  7. Veroff D, Marr A, Wennberg DE. Enhanced support for shared decision making reduced costs of care for patients with preference-sensitive conditions. Health Aff (Millwood). 2013;32(2):285-293.
  8. Centers for Medicare and Medicaid Services. Medicare program; hospital inpatient value-based purchasing program. Final rule. Fed Regist. 2011;76(88):26490-26547.
  9. Centers for Medicare and Medicaid Services. CMS begins implementation of key payment legislation. Available at: https://www.cms.gov/Newsroom/MediaReleaseDatabase/Press‐releases/2015‐Press‐releases‐items/2015‐07‐08.html. Published July 8, 2015.
  10. Dalal AK, Dykes PC, Collins S, et al. A web-based, patient-centered toolkit to engage patients and caregivers in the acute care setting: a preliminary evaluation [published online August 2, 2015]. J Am Med Informatics Assoc. doi: 10.1093/jamia/ocv093.
  11. Daly BJ, Douglas SL, O'Toole E, et al. Effectiveness trial of an intensive communication structure for families of long-stay ICU patients. Chest. 2010;138(6):1340-1348.
  12. Brandt DS, Shinkunas LA, Gehlbach TG, Kaldjian LC. Understanding goals of care statements and preferences among patients and their surrogates in the medical ICU. J Hosp Palliat Nurs. 2012;14(2):126-132.
  13. Haberle TH, Shinkunas LA, Erekson ZD, Kaldjian LC. Goals of care among hospitalized patients: a validation study. Am J Hosp Palliat Care. 2011;28(5):335-341.
  14. Goodlin SJ, Zhong Z, Lynn J, et al. Factors associated with use of cardiopulmonary resuscitation in seriously ill hospitalized adults. JAMA. 1999;282(24):2333-2339.
  15. Mack JW, Weeks JC, Wright AA, Block SD, Prigerson HG. End-of-life discussions, goal attainment, and distress at the end of life: predictors and outcomes of receipt of care consistent with preferences. J Clin Oncol. 2010;28(7):1203-1208.
  16. Mack JW, Smith TJ. Reasons why physicians do not have discussions about poor prognosis, why it matters, and what can be improved. J Clin Oncol. 2012;30(22):2715-2717.
  17. Wright AA, Zhang B, Ray A, et al. Associations between end-of-life discussions, patient mental health, medical care near death, and caregiver bereavement adjustment. JAMA. 2008;300(14):1665-1673.
  18. Chiarchiaro J, Buddadhumaruk P, Arnold RM, White DB. Prior advance care planning is associated with less decisional conflict among surrogates for critically ill patients. Ann Am Thorac Soc. 2015;12(10):1528-1533.
  19. O'Leary KJ, Kulkarni N, Landler MP, et al. Hospitalized patients' understanding of their plan of care. Mayo Clin Proc. 2010;85(1):47-52.
  20. Shin DW, Kim SY, Cho J, et al. Discordance in perceived needs between patients and physicians in oncology practice: a nationwide survey in Korea. J Clin Oncol. 2011;29(33):4424-4429.
  21. Dykes PC, DaDamio RR, Goldsmith D, Kim H, Ohashi K, Saba VK. Leveraging standards to support patient-centric interdisciplinary plans of care. AMIA Annu Symp Proc. 2011;2011:356-363.
  22. Desalvo KB, Muntner P. Discordance between physician and patient self-rated health and all-cause mortality. Ochsner J. 2011;11(3):232-240.
  23. Yen JC, Abrahamowicz M, Dobkin PL, Clarke AE, Battista RN, Fortin PR. Determinants of discordance between patients and physicians in their assessment of lupus disease activity. J Rheumatol. 2003;30(9):1967-1976.
  24. Rothe A, Bielitzer M, Meinertz T, Limbourg T, Ladwig KH, Goette A. Predictors of discordance between physicians' and patients' appraisals of health-related quality of life in atrial fibrillation patients: findings from the Angiotensin II Antagonist in Paroxysmal Atrial Fibrillation Trial. Am Heart J. 2013;166(3):589-596.
  25. Rozenblum R, Lisby M, Hockey PM, et al. Uncovering the blind spot of patient satisfaction: an international survey. BMJ Qual Saf. 2011;20(11):959-965.
  26. O'Leary KJ, Haviley C, Slade ME, Shah HM, Lee J, Williams M. Improving teamwork: impact of structured interdisciplinary rounds on a hospitalist unit. J Hosp Med. 2011;6(2):88-93.
  27. Lee SJ, Fairclough D, Antin JH, Weeks JC. Discrepancies between patient and physician estimates for the success of stem cell transplantation. JAMA. 2001;285(8):1034-1038.
  28. Lee SJ, Loberiza FR, Rizzo JD, Soiffer RJ, Antin JH, Weeks JC. Optimistic expectations and survival after hematopoietic stem cell transplantation. Biol Blood Marrow Transplant. 2003;9(6):389-396.
  29. Bernacki R, Hutchings M, Vick J, et al. Development of the Serious Illness Care Program: a randomised controlled trial of a palliative care communication intervention. BMJ Open. 2015;5(10):e009032.
  30. O'Leary KJ, Thompson JA, Landler MP, et al. Patterns of nurse-physician communication and agreement on the plan of care. Qual Saf Health Care. 2010;19(3):195-199.
  31. O'Leary KJ, Wayne DB, Landler MP, et al. Impact of localizing physicians to hospital units on nurse-physician communication and agreement on the plan of care. J Gen Intern Med. 2009;24(11):1223-1227.
  32. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812.
  33. Stein J, Payne C, Methvin A, et al. Reorganizing a hospital ward as an accountable care unit. J Hosp Med. 2015;10(1):36-40.
  34. O'Leary KJ, Sehgal NL, Terrell G, Williams M. Interdisciplinary teamwork in hospitals: a review and practical recommendations for improvement. J Hosp Med. 2012;7(1):48-54.
  35. Sama PR, Eapen ZJ, Weinfurt KP, Shah BR, Schulman KA. An evaluation of mobile health application tools. JMIR mHealth uHealth. 2014;2(2):e19.
Issue
Journal of Hospital Medicine - 11(9)
Page Number
615-619


In the acute care setting, a prerequisite for high‐quality patient‐centered care is identifying a patient's primary goal for recovery and then delivering care consistent with that goal.[11, 12, 13] Haberle et al. previously validated a classification of patients' most common goals for recovery in the hospital setting into 7 broad categories: (1) be cured, (2) live longer, (3) improve or maintain health, (4) be comfortable, (5) accomplish a particular life goal, (6) provide support for a family member, or (7) other.[13] When providers' understanding of these recovery goals is not concordant with the patient's stated goals, patients may receive care inconsistent with their preferences; it is not uncommon for patients to receive aggressive curative treatments (eg, cardiopulmonary resuscitation) when they have expressed otherwise.[14] On the other hand, when patient goals and priorities are clearly established, patients may have better outcomes.[15] For example, earlier conversations about patient goals and priorities in serious illness can lead to realistic expectations of treatment, enhanced goal‐concordant care, improved quality of life, higher patient satisfaction, more and earlier hospice care, fewer hospitalizations, better patient and family coping, reduced burden of decision making for families, and improved bereavement outcomes.[16, 17, 18]

Although previous studies have suggested poor patient‐physician concordance with regard to the patient's plan of care,[19, 20, 21, 22, 23, 24] there are limited data regarding providers' understanding of the patient's primary recovery goal during hospitalization. The purpose of this study was to identify each patient's primary Haberle goal and then determine the degree of concordance among patients and key hospital providers regarding this goal.

METHODS

Study Setting

The Partners Human Research Committee approved the study. The study was conducted on an oncology unit and a medical intensive care unit (MICU) at a major academic medical center in Boston, Massachusetts. The oncology unit comprised 2 non‐localized medical teams caring for patients admitted to that unit; the MICU comprised a single localized medical team. Medical teams working on these units consisted of a first responder (eg, an intern or a physician assistant [PA]), medical residents, and an attending physician. Both units had dedicated nursing staff.

Study Participants

All adult patients (>17 years of age) admitted to the oncology and MICU units during the study period (November 2013 through May 2014) were eligible. These units were chosen because their patients are typically complex, with multiple medical comorbidities, longer lengths of stay, and many procedures and tests. In addition, a standard method for asking patients to identify a primary recovery goal for hospitalization aligned well with ongoing institutional efforts to engage these patients in goals of care discussions.

Research assistants identified all patients admitted to each study unit for at least 48 hours and approached them in a random order with a daily target of 2 to 3 patients. Only patients who demonstrated capacity (determined by the medical team) or had a legally designated healthcare proxy (who spoke English and was available to participate on their behalf) were included. Research assistants then approached the patient's nurse and a physician provider (defined for this study as a housestaff physician, PA, or attending) from the primary medical team to participate in the interview (within 24 hours of the patient's interview). We excluded eligible patients who did not have capacity or an available caregiver, or who declined to participate.

Data Collection Instrument and Interviews

Research assistants administered a validated questionnaire developed by Haberle et al. to participants at least 48 hours after the patient's admission, to allow time to establish mutual understanding of the diagnosis and prognosis.[13] We asked patients (or the designated healthcare proxy) to select their single, most important Haberle goal (see above). Specifically, as in the original validation study,[13] patients or proxies were asked the following question: "Please tell me your most important goal of care for this hospitalization." If they did not understand this question, we asked a follow‐up question: "What are you expecting will be accomplished during this hospitalization?" Within 24 hours of the patient/proxy interview, we independently asked the patient's nurse and physician to select what they thought was the patient's most important goal for recovery using the same questionnaire, adapted for providers. In each case, all participants were blinded to the responses of others.

Measures

We measured the frequency with which each participant (patient/proxy, nurse, and physician) selected each Haberle recovery goal across all patients. We also measured the rate of pairwise concordance by recovery goal for each participant dyad (patient/proxy‐nurse, patient/proxy‐physician, and nurse‐physician). Finally, we calculated the frequency of cases in which all 3 participants selected the same recovery goal (a minimal sketch of these agreement calculations follows).
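For illustration only, the sketch below (not the authors' code) shows one way these raw agreement quantities could be computed from per‐patient goal selections; the goal labels and values are hypothetical, not the study dataset.

```python
# Illustrative sketch only (hypothetical goal selections, not the study dataset):
# raw pairwise concordance and 3-way agreement, one entry per patient.

from collections import Counter

patients   = ["be cured", "be comfortable", "improve health", "be cured", "live longer"]
nurses     = ["improve health", "be comfortable", "improve health", "be cured", "improve health"]
physicians = ["improve health", "be comfortable", "be cured", "be cured", "improve health"]

def pairwise_concordance(a, b):
    """Proportion of patients for whom both raters selected the identical goal."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print("patient-nurse:", pairwise_concordance(patients, nurses))
print("patient-physician:", pairwise_concordance(patients, physicians))
print("nurse-physician:", pairwise_concordance(nurses, physicians))

# Frequency of cases in which all 3 participants selected the same goal.
triple = sum(p == n == d for p, n, d in zip(patients, nurses, physicians)) / len(patients)
print("all three agree:", triple)

# Frequency of each goal by participant type (as reported in Table 2).
print(Counter(patients), Counter(nurses), Counter(physicians))
```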

Statistical Analyses

Descriptive statistics were used to report patient demographic data. The frequencies of selected responses were calculated and reported as percentages for each type of participant. The differences in rates of responses for each Haberle goal were compared across each participant group using χ2 analysis. We then performed 2‐way Kappa statistical tests to measure inter‐rater agreement for each dyad.
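The sketch below is illustrative only and is not the authors' analysis code: it applies a chi‐square test to the goal frequencies reported in Table 2 of this article and a 2‐way (Cohen's) kappa to hypothetical per‐patient labels, using SciPy and scikit‐learn.

```python
# Illustrative sketch only, not the authors' analysis code.

import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Goal selections by participant group (counts taken from Table 2 of this article);
# columns: be cured, comfortable, improve/maintain, live longer, personal goal, family, other.
counts = np.array([
    [51,  9, 32, 14, 2, 1, 0],   # patient/caregiver
    [20, 18, 42, 21, 0, 1, 7],   # physician provider
    [20, 18, 51, 12, 3, 1, 4],   # nurse
])
chi2, p_value, dof, _ = chi2_contingency(counts)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p_value:.4f}")
# Note: sparse cells (several counts < 5) may warrant combining categories in practice.

# Two-way (Cohen's) kappa for a single dyad, computed from per-patient labels
# (hypothetical labels here; the study used the actual paired selections).
patient_goals = ["be cured", "improve health", "be cured", "be comfortable", "improve health"]
nurse_goals   = ["improve health", "improve health", "be cured", "be comfortable", "be cured"]
print("kappa:", cohen_kappa_score(patient_goals, nurse_goals))
```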

RESULTS

Of 1436 patients (882 oncology, 554 MICU) hospitalized during the study period, 341 (156 oncology, 185 MICU) were admitted for <48 hours. Of 914 potentially eligible patients (617 oncology, 297 MICU), 191 (112 oncology and 79 MICU) were approached to participate based on our sampling strategy; of these, 8 (2 oncology and 6 MICU) did not have capacity (and no proxy was available) and 2 (1 oncology and 1 MICU) declined. Of the remaining 181 patients (109 oncology and 72 MICU), we obtained completed questionnaires from all 3 interviewees for 109 patients (60.2% response rate).

Of the 109 study patients, 52 (47.7%) and 57 (52.3%) were admitted to the oncology and medical intensive care units, respectively (Table 1). Patients were predominantly middle aged, Caucasian, English‐speaking, and college‐educated. Healthcare proxies were frequently interviewed on behalf of patients in the MICU. Housestaff physicians were more often interviewed in the MICU, and PAs were interviewed only on oncology units. Compared to patient responders, nonresponders tended to be male and were admitted to oncology units (see Supporting Table 1 in the online version of this article).

Patient Characteristics
Characteristics All Patients Admitted to Medical Intensive Care Units Admitted to Oncology Units
  • NOTE: Abbreviations: SD, standard deviation. *Patients were interviewed as part of a unique patient admission.

Total, no. (%) 109 (100%) 57 (52.3%) 52 (47.7%)
Gender, no. (%)
Male 55 (50.5%) 28 (49.1%) 26 (50.0%)
Female 54 (49.5%) 29 (50.9%) 26 (50.0%)
Age, y, mean ± SD 59.4 ± 14 59.7 ± 15 59.1 ± 13
Median 61 61 60
Range 21–88 21–88 22–85
Race, no. (%)
White 103 (94.5%) 53 (93.0%) 50 (96.2%)
Other 6 (5.5%) 4 (7.0%) 2 (3.8%)
Language, no. (%)
English 106 (97.2%) 56 (98.1%) 50 (96.2%)
Other 3 (2.8%) 1 (1.9%) 2 (3.8%)
Education level, no. (%)
Less than high school 30 (27.5%) 17 (29.8%) 13 (25.0%)
High school diploma 27 (24.5%) 18 (31.6%) 9 (17.3%)
Some college or beyond 52 (47.7%) 22 (38.6%) 30 (57.7%)
Patient or caregiver interviewed, no. (%)
Patient 68 (62.4%) 27 (47.4%) 48 (92.3%)
Caregiver 41 (37.6%) 30 (52.6%) 4 (7.7%)
Nurse interviewed, no. (unique) 109 (75) 57 (42) 52 (33)
Physician provider interviewed, no. (%); no. unique
Attending 27 (24.8%); 20 15 (26.3%); 10 12 (23.1%); 10
Housestaff 48 (44.0%); 39 42 (73.7%); 33 6 (11.5%); 6
Physician assistant 34 (31.2%); 25 0 (0%); 0 34 (65.4%); 25

The frequencies of selected Haberle recovery goals by participant type across all patients are listed in Table 2. Patients (or proxies) most often selected be cured (46.8%). Assigned nurses and physicians more commonly selected improve or maintain health (46.8% and 38.5%, respectively). Be comfortable was selected by nurses and physicians more frequently than by patients (16.5%, 16.5%, and 8.3%, respectively). The rate of responses for each Haberle goal was significantly different across all respondent groups (P < 0.0001). The frequencies of selected Haberle goals did not differ significantly between patients and proxies (P = 0.67) or between patients admitted to the MICU and those admitted to oncology units (P = 0.64).

Primary Recovery Goal Reported by Patient, Physician Provider, and Nurse
Haberle Recovery Goal Patient/Caregiver, no. (%), n = 109 Physician Provider, no. (%), n = 109* Nurse, no. (%), n = 109
  • NOTE: *Physician provider is defined as either a housestaff physician, physician assistant, or attending physician for the purposes of this study.

Be cured 51 (46.8%) 20 (18.3%) 20 (18.3%)
Be comfortable 9 (8.3%) 18 (16.5%) 18 (16.5%)
Improve or maintain health 32 (29.4%) 42 (38.5%) 51 (46.8%)
Live longer 14 (12.8%) 21 (19.3%) 12 (11%)
Accomplish personal goal 2 (1.8%) 0 (0%) 3 (2.8%)
Provide support for family 1 (0.9%) 1 (0.9%) 1 (0.9%)
Other 0 (0%) 7 (6.4%) 4 (3.7%)

Inter‐rater agreement was poor to slight for the 3 participant dyads (kappa 0.09 [0.03‐0.19], 0.19 [0.08‐0.30], and 0.20 [0.08‐0.32] for patient‐physician, patient‐nurse, and nurse‐physician, respectively). The 3 participants selected the identical recovery goal in 22 (20.2%) cases, and each selected a distinct recovery goal in 32 (29.4%) cases. Pairwise concordance between nurses and physicians was 39.4%. There were no significant differences in agreement between patients admitted to the MICU compared to oncology units (P = 0.09).

DISCUSSION

We observed poor to slight concordance among patients and key hospital providers with regard to identifying the patient's primary recovery goal during acute hospitalization. Patients (or proxies) most often chose be cured, whereas hospital providers most often chose improve or maintain health. Patients were twice as likely to select be cured and half as likely to choose be comfortable compared with nurses or physicians. Strikingly, the patient (or proxy), nurse, and physician identified the same recovery goal in just 20% of cases. These findings were similar for patients admitted to the MICU or oncology units and when healthcare proxies participated on behalf of the patient (eg, when the patient was incapacitated in the MICU).

There are many reasons why hospital providers may not correctly identify patients' primary recovery goals. First, we do not routinely ask patients to identify recovery goals upon admission in a structured and standardized manner. In fact, clinicians often do not elicit patients' needs, concerns, and expectations regarding their care in general.[25] Second, even when recovery goals are elicited at admission, they may not be communicated effectively to all members of the care team. This could be due to geographically non‐localized teams (although we did not observe a statistically significant difference between regionalized MICU and nonregionalized oncology care units), frequent provider‐to‐provider handoffs, and siloed electronic communication (eg, email, alphanumeric pages) regarding goals of care that inevitably leaves out key providers.[26] Third, healthcare proxies who are involved in decision making on the patient's behalf may not always be available to meet with the care team in person; consequently, their input may not be considered in a timely manner or reliably communicated to all members of the care team.

We observed a large discrepancy in how often patients chose be cured compared with their hospital providers. This could be explained by clinicians' unwillingness to disclose bad news or divulge accurate prognostic information that causes patients to feel depressed or lose hope, particularly for those patients with the worst prognoses.[16, 27, 28] Patients may lack sophisticated knowledge of their conditions for a variety of reasons, including low health literacy, at times choosing to hope for the best even when it is not realistic. Additionally, there may be more subtle differences in what patients and hospital providers consider the primary recovery goal in the context of the main reason for hospitalization and the underlying medical illness. For example, a patient with metastatic lung cancer hospitalized with recurrent postobstructive pneumonia may choose be cured as his/her primary recovery goal (thinking of the pneumonia), whereas physicians may choose improve/maintain health or comfort (thinking of the cancer). We also cannot exclude the possibility that sometimes when patients state be cured and clinicians state improve health as the primary goal, they are really saying the same thing in different ways. However, these are 2 different constructs (cure may not be possible for many patients) that may deserve an explicit discussion for patients to have realistic expectations for their health following hospitalization.

In short, our results underscore the importance of having an open and honest dialogue with patients and caregivers throughout hospitalization and the need, where appropriate, to provide education about the potential futility of excessive care. Simply following patients' goals without discussing their feasibility and the consequences of aggressive treatments may result in unnecessary morbidity and misuse of healthcare resources. Once goals are clearly established, communicated, and refined in hospitalized patients with serious illness, there is much reason to believe that ongoing conversation will favorably affect outcomes.[29]

We found few studies that rigorously quantified the rate of concordance of hospital recovery goals among patients and key hospital providers; however, studies that measured agreement on the overall plan of care have demonstrated suboptimal concordance.[20, 30, 31] Shin et al. found significant underestimation of cancer patients' needs and poor concordance between patients and oncologists in assessing perceived needs of supportive care.[20] It is also notable that nurses and physicians had low levels of concordance in our study. O'Leary and colleagues found that nurses and physicians did not reliably communicate and often did not agree on the plan of care for hospitalized patients.[30] Although geographic regionalization of care teams and multidisciplinary rounds can improve the likelihood that key members of the care team are on the same page with regard to the plan of care, there is still much room for improvement.[26, 32, 33, 34] For example, although nurses and physicians in our study independently selected individual recovery goals with similar frequencies (Table 2), we observed suboptimal concordance between nurses and physician providers (36.8%) for individual patients, including on our regionalized care unit (MICU). This may be due to the reasons described above.

There are several implications of these findings. As payors continue to shift payments toward value‐based metrics, largely determined by patient experience and adequate advance care planning,[9] our findings suggest that more effort should be focused on delivering care consistent with patients' primary recovery goals. As a first step, healthcare organizations can focus on efforts to systematically identify and communicate recovery goals to all members of the care team, ensuring that patients' preferences, needs, and values are captured. In addition, as innovation in patient engagement and care delivery using Web‐based and mobile technology continues to grow,[35] using these tools to capture key goals for hospitalization and recovery can play an essential role. For example, as electronic health record vendors and institutions start to implement patient portals in the acute care setting, they should consider how to configure these tools to capture such goals and then communicate them to the care team; preliminary work in this area is promising.[10]

Our study has several limitations to generalizability. First, the study was conducted on 2 services (MICU and oncology) at a single institution using a sampling strategy where research assistants enrolled 2 to 3 patients per day. Although the sampling was random, the availability of patients and proxies to be interviewed may have led to selection bias. Second, the sample size was small. Third, the patients who participated were predominantly white, English‐speaking, and well educated, possibly a consequence of our sampling strategy. However, this fact makes our findings more striking; although cultural and language barriers were generally not present in our study population, large discrepancies in goal concordance still existed. Fourth, in instances when patients were unable to participate themselves, we interviewed their healthcare proxy; therefore, it is possible that the proxies' responses did not reflect those of the patient. However, we note that concordance rates did not significantly differ between the 2 services despite the fact that the proportion of proxy interviews was much higher in the MICU. Similarly, we cannot exclude the possibility that patients altered their stated goals in the presence of proxies, but patients were given the option to be interviewed alone. Patients may also have misunderstood the timing of the goals (during this hospitalization as opposed to long term), although research assistants made every effort to clarify this during the interviews. Finally, our data‐collection instrument was previously validated in hospitalized general medicine patients and not oncology or MICU patients, and it has not been used to directly ask clinicians to identify patients' recovery goals. However, there is no reason to suspect that it could not be used for this purpose in critical care as well as noncritical care settings, as the survey was developed by a multidisciplinary team that included medical professionals and was validated by clinicians who successfully identified a single, very broad goal (eg, be cured) in each case.

CONCLUSION

We report poor to slight concordance among hospitalized patients and key hospital providers with regard to the main recovery goal. Future studies should assess whether patient satisfaction and experience are adversely affected by patient‐provider discordance regarding key recovery goals. Additionally, institutions may consider future efforts to elicit and communicate patients' primary recovery goals more effectively to all members of the care team, and to address discrepancies as soon as they are discovered.

Disclosures

This work was supported by a grant from the Gordon and Betty Moore Foundation (GBMF) (grant GBMF3914). GBMF had no role in the design or conduct of the study; collection, analysis, or interpretation of data; or preparation or review of the manuscript. The authors report no conflicts of interest.

References
  1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy of Sciences; 2001.
  2. Bartlett EE, Grayson M, Barker R, Levine DM, Golden A, Libber S. The effects of physician communications skills on patient satisfaction, recall, and adherence. J Chronic Dis. 1984;37(9–10):755–764.
  3. Little P, Everitt H, Williamson I, et al. Observational study of effect of patient centredness and positive approach on outcomes of general practice consultations. BMJ. 2001;323(7318):908–911.
  4. Clancy CM. Reengineering hospital discharge: a protocol to improve patient safety, reduce costs, and boost patient satisfaction. Am J Med Qual. 2009;24(4):344–346.
  5. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41–48.
  6. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients' perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921–1931.
  7. Veroff D, Marr A, Wennberg DE. Enhanced support for shared decision making reduced costs of care for patients with preference‐sensitive conditions. Health Aff (Millwood). 2013;32(2):285–293.
  8. Centers for Medicare and Medicaid Services. Medicare program; hospital inpatient value‐based purchasing program. Final rule. Fed Regist. 2011;76(88):26490–26547.
  9. Centers for Medicare and Medicaid Services. CMS begins implementation of key payment legislation. Available at: https://www.cms.gov/Newsroom/MediaReleaseDatabase/Press‐releases/2015‐Press‐releases‐items/2015‐07‐08.html. Published July 8, 2015.
  10. Dalal AK, Dykes PC, Collins S, et al. A web‐based, patient‐centered toolkit to engage patients and caregivers in the acute care setting: a preliminary evaluation [published online August 2, 2015]. J Am Med Inform Assoc. doi: 10.1093/jamia/ocv093.
  11. Daly BJ, Douglas SL, O'Toole E, et al. Effectiveness trial of an intensive communication structure for families of long‐stay ICU patients. Chest. 2010;138(6):1340–1348.
  12. Brandt DS, Shinkunas LA, Gehlbach TG, Kaldjian LC. Understanding goals of care statements and preferences among patients and their surrogates in the medical ICU. J Hosp Palliat Nurs. 2012;14(2):126–132.
  13. Haberle TH, Shinkunas LA, Erekson ZD, Kaldjian LC. Goals of care among hospitalized patients: a validation study. Am J Hosp Palliat Care. 2011;28(5):335–341.
  14. Goodlin SJ, Zhong Z, Lynn J, et al. Factors associated with use of cardiopulmonary resuscitation in seriously ill hospitalized adults. JAMA. 1999;282(24):2333–2339.
  15. Mack JW, Weeks JC, Wright AA, Block SD, Prigerson HG. End‐of‐life discussions, goal attainment, and distress at the end of life: predictors and outcomes of receipt of care consistent with preferences. J Clin Oncol. 2010;28(7):1203–1208.
  16. Mack JW, Smith TJ. Reasons why physicians do not have discussions about poor prognosis, why it matters, and what can be improved. J Clin Oncol. 2012;30(22):2715–2717.
  17. Wright AA, Zhang B, Ray A, et al. Associations between end‐of‐life discussions, patient mental health, medical care near death, and caregiver bereavement adjustment. JAMA. 2008;300(14):1665–1673.
  18. Chiarchiaro J, Buddadhumaruk P, Arnold RM, White DB. Prior advance care planning is associated with less decisional conflict among surrogates for critically ill patients. Ann Am Thorac Soc. 2015;12(10):1528–1533.
  19. O'Leary KJ, Kulkarni N, Landler MP, et al. Hospitalized patients' understanding of their plan of care. Mayo Clin Proc. 2010;85(1):47–52.
  20. Shin DW, Kim SY, Cho J, et al. Discordance in perceived needs between patients and physicians in oncology practice: a nationwide survey in Korea. J Clin Oncol. 2011;29(33):4424–4429.
  21. Dykes PC, DaDamio RR, Goldsmith D, Kim H, Ohashi K, Saba VK. Leveraging standards to support patient‐centric interdisciplinary plans of care. AMIA Annu Symp Proc. 2011;2011:356–363.
  22. Desalvo KB, Muntner P. Discordance between physician and patient self‐rated health and all‐cause mortality. Ochsner J. 2011;11(3):232–240.
  23. Yen JC, Abrahamowicz M, Dobkin PL, Clarke AE, Battista RN, Fortin PR. Determinants of discordance between patients and physicians in their assessment of lupus disease activity. J Rheumatol. 2003;30(9):1967–1976.
  24. Rothe A, Bielitzer M, Meinertz T, Limbourg T, Ladwig KH, Goette A. Predictors of discordance between physicians' and patients' appraisals of health‐related quality of life in atrial fibrillation patients: findings from the Angiotensin II Antagonist in Paroxysmal Atrial Fibrillation Trial. Am Heart J. 2013;166(3):589–596.
  25. Rozenblum R, Lisby M, Hockey PM, et al. Uncovering the blind spot of patient satisfaction: an international survey. BMJ Qual Saf. 2011;20(11):959–965.
  26. O'Leary KJ, Haviley C, Slade ME, Shah HM, Lee J, Williams M. Improving teamwork: impact of structured interdisciplinary rounds on a hospitalist unit. J Hosp Med. 2011;6(2):88–93.
  27. Lee SJ, Fairclough D, Antin JH, Weeks JC. Discrepancies between patient and physician estimates for the success of stem cell transplantation. JAMA. 2001;285(8):1034–1038.
  28. Lee SJ, Loberiza FR, Rizzo JD, Soiffer RJ, Antin JH, Weeks JC. Optimistic expectations and survival after hematopoietic stem cell transplantation. Biol Blood Marrow Transplant. 2003;9(6):389–396.
  29. Bernacki R, Hutchings M, Vick J, et al. Development of the Serious Illness Care Program: a randomised controlled trial of a palliative care communication intervention. BMJ Open. 2015;5(10):e009032.
  30. O'Leary KJ, Thompson JA, Landler MP, et al. Patterns of nurse‐physician communication and agreement on the plan of care. Qual Saf Health Care. 2010;19(3):195–199.
  31. O'Leary KJ, Wayne DB, Landler MP, et al. Impact of localizing physicians to hospital units on nurse‐physician communication and agreement on the plan of care. J Gen Intern Med. 2009;24(11):1223–1227.
  32. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803–1812.
  33. Stein J, Payne C, Methvin A, et al. Reorganizing a hospital ward as an accountable care unit. J Hosp Med. 2015;10(1):36–40.
  34. O'Leary KJ, Sehgal NL, Terrell G, Williams M. Interdisciplinary teamwork in hospitals: a review and practical recommendations for improvement. J Hosp Med. 2012;7(1):48–54.
  35. Sama PR, Eapen ZJ, Weinfurt KP, Shah BR, Schulman KA. An evaluation of mobile health application tools. JMIR mHealth uHealth. 2014;2(2):e19.
Issue
Journal of Hospital Medicine - 11(9)
Page Number
615-619
Display Headline
How often are hospitalized patients and providers on the same page with regard to the patient's primary recovery goal for hospitalization?
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Jose F. Figueroa, MD, Instructor of Medicine, Harvard Medical School, Associate Physician, Brigham Fax: 844‐861‐3477; E‐mail: [email protected]

Discharge Preparedness and Readmission

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Preparedness for hospital discharge and prediction of readmission

In recent years, US hospitals have focused on decreasing readmission rates, incentivized by reimbursement penalties levied on hospitals with excessive readmissions.[1] Gaps in the quality of care provided during transitions likely contribute to preventable readmissions.[2] One compelling quality assessment in this setting is measuring patients' preparedness for discharge, using key dimensions such as their understanding of instructions for medication use and follow‐up. Patient‐reported preparedness for discharge may also be useful for identifying risk of readmission.

Several patient‐reported measures of preparedness for discharge exist, and herein we describe 2 measures of interest. First, the Brief‐PREPARED (B‐PREPARED) measure was derived from the longer PREPARED instrument (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services), which reflects the patient's perceived needs at discharge. In previous research, the B‐PREPARED measure predicted emergency department (ED) visits for patients who had been recently hospitalized and had a high risk for readmission.[3] Second, the Care Transitions Measure‐3 (CTM‐3) was developed by Coleman et al. as a patient‐reported measure to discriminate patients who were more likely to have an ED visit or readmission from those who were not. CTM‐3 has also been used to evaluate hospitals' level of care coordination and for public reporting purposes.[4, 5, 6] It has been endorsed by the National Quality Forum and incorporated into the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey provided to samples of recently hospitalized US patients.[7] However, recent evidence from an inpatient cohort of cardiovascular patients suggests the CTM‐3 inflates care transition scores compared to the longer 15‐item CTM. In that cohort, the CTM‐3 could not differentiate between patients who did or did not have repeat ED visits or readmission.[8] Thus far, the B‐PREPARED and CTM‐3 measures have not been compared to one another directly.

In addition to the development of patient‐reported measures, hospitals increasingly employ administrative algorithms to predict likelihood of readmission.[9] A commonly used measure is the LACE index (Length of stay, Acuity, Comorbidity, and Emergency department use).[10] The LACE index predicted readmission and death within 30 days of discharge in a large cohort in Canada. In 2 retrospective studies of recently hospitalized patients in the United States, the LACE index's ability to discriminate between patients readmitted or not ranged from slightly better than chance to moderate (C statistic 0.56‐0.77).[11, 12]

It is unknown whether adding patient‐reported preparedness measures to commonly used readmission prediction scores increases the ability to predict readmission risk. We sought to determine whether the B‐PREPARED and CTM‐3 measures were predictive of readmission or death, as compared to the LACE index, in a large cohort of cardiovascular patients. In addition, we sought to determine the additional predictive and discriminative ability gained from administering the B‐PREPARED and CTM‐3 measures, while adjusting for the LACE index and other clinical factors. We hypothesized that: (1) higher preparedness scores on both measures would predict lower risk of readmission or death in a cohort of patients hospitalized with cardiac diagnoses; and (2) because it provides more specific and actionable information, the B‐PREPARED would discriminate readmission more accurately than CTM‐3, after controlling for clinical factors.

METHODS

Study Setting and Design

The Vanderbilt Inpatient Cohort Study (VICS) is a prospective study of patients admitted with cardiovascular disease to Vanderbilt University Hospital. The purpose of VICS is to investigate the impact of patient and social factors on postdischarge health outcomes such as quality of life, unplanned hospital utilization, and mortality. The rationale and design of VICS are detailed elsewhere.[13] Briefly, participants completed a baseline interview while hospitalized, and follow‐up phone calls were conducted within 2 to 9 days and at approximately 30 and 90 days postdischarge. During the first follow‐up call conducted by research assistants, we collected preparedness for discharge data utilizing the 2 measures described below. After the 90‐day phone call, we collected healthcare utilization since the index admission. The study was approved by the Vanderbilt University Institutional Review Board.

Patients

Eligibility screening shortly after admission identified patients with acute decompensated heart failure (ADHF) and/or an intermediate or high likelihood of acute coronary syndrome (ACS) per a physician's review of the clinical record. Exclusion criteria included: age <18 years, non‐English speaker, unstable psychiatric illness, delirium, low likelihood of follow‐up (eg, no reliable telephone number), on hospice, or otherwise too ill to complete an interview. To be included in these analyses, patients must have completed the preparedness for discharge measurements during the first follow‐up call. Patients who died before discharge or before completing the follow‐up call were excluded.

Preparedness for Discharge Measures (Patient‐Reported Data)

Preparedness for discharge was assessed using the 11‐item B‐PREPARED and the 3‐item CTM‐3.

The B‐PREPARED measures how prepared patients felt leaving the hospital with regard to self‐care information for medications and activity, equipment/community services needed, and confidence in managing one's health after hospitalization. The B‐PREPARED measure has good internal consistency reliability (Cronbach's α = 0.76) and has been validated in patients of varying age within a week of discharge. Preparedness is the sum of responses to all 11 questions, with a range of 0 to 22. Higher scores reflect increased preparedness for discharge.[3]

The CTM‐3 asks patients to rate how well their preferences were considered regarding transitional needs, as well as their understanding of postdischarge self‐management and the purpose of their medications, each on a 4‐point response scale (strongly disagree to strongly agree). The sum of the 3 responses quantifies the patient's perception of the quality of the care transition at discharge (Cronbach's α = 0.86,[14] 0.92 in a cohort similar to ours[8]). Scores range from 3 to 12, with higher scores indicating more preparedness. The sum is then transformed to a 0 to 100 scale.[15]
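A minimal scoring sketch for the two measures follows, based only on the score ranges and transformation described above; the exact item wording and coding belong to the original instruments and are not reproduced here. The CTM‐3 rescaling shown is the standard linear transformation consistent with the score values reported for this cohort.

```python
# Minimal scoring sketch; assumes item responses are already coded numerically.
# Exact item wording and coding belong to the original instruments.

def b_prepared_score(items):
    """B-PREPARED: sum of the 11 item responses; range 0-22, higher = more prepared."""
    assert len(items) == 11
    return sum(items)

def ctm3_score(items):
    """CTM-3: 3 items rated 1 (strongly disagree) to 4 (strongly agree).
    Raw sum ranges 3-12 and is linearly rescaled to 0-100, higher = better transition."""
    assert len(items) == 3 and all(1 <= x <= 4 for x in items)
    return (sum(items) - 3) / 9 * 100

print(b_prepared_score([2] * 11))        # 22, maximal preparedness
print(round(ctm3_score([4, 3, 3]), 1))   # 77.8, the median CTM-3 score in this cohort
```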

Clinical Readmission Risk Measures (Medical Record Data)

The LACE index, published by Van Walraven et al.,[10] takes into account 4 categories of clinical data: length of hospital stay, acuity of event, comorbidities, and ED visits in the prior 6 months. More specifically, a diagnostic code‐based, modified version of the Charlson Comorbidity Index was used to calculate the comorbidity score. These clinical criteria were obtained from an administrative database and weighted according to the methods used by Van Walraven et al. An overall score was calculated on a scale of 0 to 19, with higher scores indicating higher risk of readmission or death within 30 days.
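For orientation, the sketch below shows LACE scoring using the point assignments commonly attributed to van Walraven et al.; the thresholds are assumptions to be verified against the original publication and are not a restatement of this study's exact implementation.

```python
# Sketch of LACE scoring with the point assignments commonly attributed to
# van Walraven et al.; verify every threshold against the original publication.

def lace_index(length_of_stay_days, acute_emergent_admission, charlson_score, ed_visits_6mo):
    # L: length of stay of the index admission
    if length_of_stay_days < 1:
        l_points = 0
    elif length_of_stay_days <= 3:
        l_points = int(length_of_stay_days)   # 1, 2, or 3 points
    elif length_of_stay_days <= 6:
        l_points = 4
    elif length_of_stay_days <= 13:
        l_points = 5
    else:
        l_points = 7
    a_points = 3 if acute_emergent_admission else 0          # A: acuity of admission
    c_points = charlson_score if charlson_score < 4 else 5   # C: Charlson comorbidity index
    e_points = min(ed_visits_6mo, 4)                          # E: ED visits in prior 6 months
    return l_points + a_points + c_points + e_points          # total score, 0-19

print(lace_index(5, True, 2, 1))   # 4 + 3 + 2 + 1 = 10
```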

From medical records, we also collected patients' demographic data including age, race, and gender, and diagnosis of ACS, ADHF, or both at hospital admission.

Outcome Measures

Healthcare utilization data were obtained from the index hospital as well as outside facilities. The electronic medical records from Vanderbilt University Hospital provided information about healthcare utilization at Vanderbilt 90 days after initial discharge. We also used Vanderbilt records to see if patients were transferred to Vanderbilt from other hospitals or if patients visited other hospitals before or after enrollment. We supplemented this with patient self‐report during the follow‐up telephone calls (at 30 and 90 days after initial discharge) so that any additional ED and hospital visits could be captured. Mortality data were collected from medical records, Social Security data, and family reports. The main outcome was time to first unplanned hospital readmission or death within 30 and 90 days of discharge.
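The outcome construction can be illustrated with a small sketch (hypothetical dates, not the study's data pipeline); it simplifies by ignoring issues such as planned readmissions and incomplete follow‐up, which the study handled through its multiple utilization sources.

```python
# Simplified sketch (hypothetical dates): time from index discharge to first unplanned
# readmission or death, censored at a 30-day horizon.

from datetime import date

def time_to_event(discharge, readmit=None, death=None, horizon=30):
    """Return (days, event) where event = 1 if readmission/death occurred within horizon."""
    event_dates = [d for d in (readmit, death) if d is not None]
    if event_dates:
        days = (min(event_dates) - discharge).days
        if days <= horizon:
            return days, 1
    return horizon, 0   # censored at the horizon

print(time_to_event(date(2014, 1, 1), readmit=date(2014, 1, 13)))   # (12, 1)
print(time_to_event(date(2014, 1, 1), death=date(2014, 3, 1)))      # (30, 0): beyond horizon
print(time_to_event(date(2014, 1, 1)))                              # (30, 0): no event observed
```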

Analysis

To describe our sample, we summarized categorical variables with percentages and continuous variables with percentiles. To test for evidence of unadjusted covariate‐outcome relationships, we used Pearson χ2 and Wilcoxon rank sum tests for categorical and continuous covariates, respectively.
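As an illustration of this unadjusted screening for a continuous covariate (hypothetical LACE values, not study data), the Wilcoxon rank sum test corresponds to SciPy's Mann‐Whitney U implementation.

```python
# Illustrative sketch (hypothetical LACE values, not study data): comparing a continuous
# covariate between outcome groups with the Wilcoxon rank sum (Mann-Whitney U) test.

from scipy.stats import mannwhitneyu

lace_not_readmitted = [3, 5, 6, 7, 9, 10, 4, 6]
lace_readmitted_or_died = [7, 9, 10, 12, 13, 8, 11]

stat, p_value = mannwhitneyu(lace_not_readmitted, lace_readmitted_or_died,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```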

For the primary analyses we used Cox proportional hazard models to examine the independent associations between the prespecified predictors for patient‐reported preparedness and time to first unplanned readmission or death within 30 and 90 days of discharge. For each outcome (30‐ and 90‐day readmission or death), we fit marginal models separately for each of the B‐PREPARED, CTM‐3, and LACE scores. We then fit multivariable models that used both preparedness measures as well as age, gender, race, and diagnosis (ADHF and/or ACS), variables available to clinicians when patients are admitted. When fitting the multivariable models, we did not find strong evidence of nonlinear effects; therefore, only linear effects are reported. To facilitate comparison of effects, we scaled continuous variables by their interquartile range (IQR). The associated, exponentiated regression parameter estimates may therefore be interpreted as hazard ratios for readmission or death per IQR change in each predictor. In addition to parameter estimation, we computed the C index to evaluate the model's capacity to discriminate between patients who were and were not readmitted or died. All analyses were conducted in R version 3.1.2 (R Foundation for Statistical Computing, Vienna, Austria).
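The authors conducted their analyses in R; purely as an illustration of the same workflow, the sketch below fits a Cox proportional hazards model in Python with the lifelines package, scales predictors by their interquartile range so hazard ratios are per IQR change, and reports the concordance (C) index. Column names and values are hypothetical.

```python
# Illustrative sketch only; the study's analyses were run in R 3.1.2. This shows a
# comparable Python workflow with the lifelines package on hypothetical data.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_event":    [30, 12, 30,  5, 22, 30, 17, 30],  # follow-up, capped at 30 days
    "readmit_or_death": [ 0,  1,  0,  1,  1,  0,  1,  0],  # 1 = readmitted or died
    "b_prepared":       [22, 17, 18, 14, 21, 22, 16, 20],
    "lace":             [ 4, 11,  6, 14,  5,  3, 12,  7],
})

# Scale continuous predictors by their interquartile range so each hazard ratio is
# interpreted per IQR change, as in the article.
for col in ["b_prepared", "lace"]:
    iqr = df[col].quantile(0.75) - df[col].quantile(0.25)
    df[col] = df[col] / iqr

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_event", event_col="readmit_or_death")
cph.print_summary()                        # hazard ratios (exp(coef)) with 95% CIs
print("C index:", cph.concordance_index_)  # discrimination of the fitted model
```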

RESULTS

From the cohort of 1239 patients (Figure 1), 64%, 28%, and 7% of patients were hospitalized with ACS, ADHF, or both, respectively (Table 1). Nearly 45% of patients were female, 83% were white, and the median age was 61 years (IQR 52–69). The median length of stay was 3 days (IQR 2–5). The median preparedness scores were high for both B‐PREPARED (21, IQR 18–22) and CTM‐3 (77.8, IQR 66.7–100). A total of 211 (17%) and 380 (31%) were readmitted or died within 30 and 90 days, respectively. The completion rate for the postdischarge phone calls was 88%.

Patient Characteristics
Death or Readmission Within 30 Days Death or Readmission Within 90 Days
Not Readmitted, N = 1028 Death/Readmitted, N = 211 P Value Not Readmitted, N = 859 Death/Readmitted, N = 380 P Value
  • NOTE: Continuous variables are summarized as 5th:25th:50th:75th:95th percentiles; categorical variables as percentage (N). Abbreviations: ACS, acute coronary syndromes; ADHF, acute decompensated heart failure; B‐PREPARED, Brief PREPARED (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services); CTM‐3, Care Transitions Measure‐3; LACE, Length of hospital stay, Acuity of event, Comorbidities, and ED visits in the prior 6 months; LOS, length of stay. *Pearson χ2 test. †Wilcoxon rank sum test.

Gender, male 55.8% (574) 53.1% (112) 0.463* 56.3% (484) 53.2% (202) 0.298*
Female 44.2% (454) 46.9% (99) 43.7% (375) 46.8% (178)
Race, white 83.9% (860) 80.6% (170) 0.237* 86.0% (737) 77.3% (293) <0.001*
Race, nonwhite 16.1% (165) 19.4% (41) 14.0% (120) 22.7% (86)
Diagnosis ACS 68.0% (699) 46.4% (98) <0.001* 72.9% (626) 45.0% (171) <0.001*
ADHF 24.8% (255) 46.0% (97) 20.3% (174) 46.8% (178)
Both 7.2% (74) 7.6% (16) 6.9% (59) 8.2% (31)
Age 39.4:52:61:68:80 37.5:53.5:62:70:82 0.301† 40:52:61:68:80 38:52:61:70:82 0.651†
LOS 1:2:3:5:10 1:3:4:7.5:17 <0.001† 1:2:3:5:9 1:3:4:7:15 <0.001†
CTM‐3 55.6:66.7:77.8:100:100 55.6:66.7:77.8:100:100 0.305† 55.6:66.7:88.9:100:100 55.6:66.7:77.8:100:100 0.080†
B‐PREPARED 12:18:21:22:22 10:17:20:22:22 0.066† 12:18:21:22:22 10:17:20:22:22 0.030†
LACE 1:4:7:10:14 3.5:7:10:13:17 <0.001† 1:4:6:9:14 3:7:10:13:16 <0.001†
Figure 1
Study flow diagram. Abbreviations: ACS, acute coronary syndrome; ADHF, acute decompensated heart failure; VICS, Vanderbilt Inpatient Cohort Study.

B‐PREPARED and CTM‐3 were moderately correlated with one another (Spearman's ρ = 0.40, P < 0.001). In bivariate analyses (Table 1), the association between B‐PREPARED and readmission or death was significant at 90 days (P = 0.030) but not 30 days. The CTM‐3 showed no significant association with readmission or death at either time point. The LACE score was significantly associated with rates of readmission at 30 and 90 days (P < 0.001).

Outcomes Within 30 Days of Discharge

When examining readmission or death within 30 days of discharge, simple unadjusted models 2 and 3 showed that the B‐PREPARED and LACE scores, respectively, were each significantly associated with time to first readmission or death (Table 2). Specifically, a 4‐point increase in the B‐PREPARED score was associated with a 16% decrease in the hazard of readmission or death (hazard ratio [HR] = 0.84, 95% confidence interval [CI]: 0.72 to 0.97). A 5‐point increase in the LACE score was associated with a 100% increase in the hazard of readmission or death (HR = 2.00, 95% CI: 1.72 to 2.32). In the multivariable model with both preparedness scores and diagnosis (model 4), the B‐PREPARED score (HR = 0.82, 95% CI: 0.70 to 0.97) was significantly associated with time to first readmission or death. In the full 30‐day model including B‐PREPARED, CTM‐3, LACE, age, gender, race, and diagnosis (model 5), only the LACE score (HR = 1.83, 95% CI: 1.54 to 2.18) was independently associated with time to readmission or death. Finally, the CTM‐3 did not predict 30‐day readmission or death in any of the models tested.

Cox Models: Time to Death or Readmission Within 30 Days of Index Hospitalization
Models HR (95% CI)* P Value C Index
  • NOTE: Abbreviations: ADHF, acute decompensated heart failure; B‐PREPARED, Brief PREPARED (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services); CI, confidence interval; CTM‐3, Care Transitions Measure‐3; HR, hazard ratio; LACE, Length of hospital stay, Acuity of event, Comorbidities, and Emergency department visits in the prior 6 months.

1. CTM (per 10‐point change) 0.95 (0.88 to 1.03) 0.257 0.523
2. B‐PREPARED (per 4‐point change) 0.84 (0.72 to 0.97) 0.017 0.537
3. LACE (per 5‐point change) 2.00 (1.72 to 2.32) <0.001 0.679
4. CTM (per 10‐point change) 1.00 (0.92 to 1.10) 0.935 0.620
B‐PREPARED (per 4‐point change) 0.82 (0.70 to 0.97) 0.019
ADHF only (vs ACS only) 2.46 (1.86 to 3.26) <0.001
ADHF and ACS (vs ACS only) 1.42 (0.84 to 2.42) 0.191
5. CTM (per 10‐point change) 1.02 (0.93 to 1.11) 0.722 0.692
B‐PREPARED (per 4 point change) 0.87 (0.74 to 1.03) 0.106
LACE (per 5‐point change) 1.83 (1.54 to 2.18) <0.001
ADHF only (vs ACS only) 1.51 (1.10 to 2.08) 0.010
ADHF and ACS (vs ACS only) 0.90 (0.52 to 1.55) 0.690
Age (per 10‐year change) 1.02 (0.92 to 1.14) 0.669
Female (vs male) 1.11 (0.85 to 1.46) 0.438
Nonwhite (vs white) 0.92 (0.64 to 1.30) 0.624

Outcomes Within 90 Days of Discharge

At 90 days after discharge, again the separate unadjusted models 2 and 3 demonstrated that the B‐PREPARED and LACE scores, respectively, were each significantly associated with time to first readmission or death, whereas the CTM‐3 model only showed marginal significance (Table 3). In the multivariable model with both preparedness scores and diagnosis (model 4), results were similar to 30 days as the B‐PREPARED score was significantly associated with time to first readmission or death. Lastly, in the full model (model 5) at 90 days, again the LACE score was significantly associated with time to first readmission or death. In addition, B‐PREPARED scores were associated with a significant decrease in risk of readmission or death (HR = 0.88, 95% CI: 0.78 to 1.00); CTM‐3 scores were not independently associated with outcomes.

Cox Models: Time to Death or Readmission Within 90 Days of Index Hospitalization
Model HR (95% CI)* P Value C Index
  • NOTE: Abbreviations: ADHF, acute decompensated heart failure; B‐PREPARED, Brief PREPARED (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services); CI, confidence interval; CTM‐3, Care Transitions Measure‐3; HR, hazard ratio; LACE, Length of hospital stay, Acuity of event, Comorbidities, and Emergency department visits in the prior 6 months.

1. CTM (per 10‐point change) 0.94 (0.89 to 1.00) 0.051 0.526
2. B‐PREPARED (per 4‐point change) 0.84 (0.75 to 0.94) 0.002 0.533
3. LACE (per 5‐point change) 2.03 (1.82 to 2.27) <0.001 0.683
4. CTM (per 10‐point change) 0.99 (0.93 to 1.06) 0.759 0.640
B‐PREPARED (per 4‐point change) 0.83 (0.74 to 0.94) 0.003
ADHF only (vs ACS only) 2.88 (2.33 to 3.56) <0.001
ADHF and ACS (vs ACS only) 1.62 (1.11 to 2.38) 0.013
5. CTM (per 10‐point change) 1.00 (0.94 to 1.07) 0.932 0.698
B‐PREPARED (per 4‐point change) 0.88 (0.78 to 1.00) 0.043
LACE (per 5‐point change) 1.76 (1.55 to 2.00) <0.001
ADHF only (vs ACS only) 1.76 (1.39 to 2.24) <0.001
ADHF and ACS (vs ACS only) 1.00 (0.67 to 1.50) 0.980
Age (per 10‐year change) 1.00 (0.93 to 1.09) 0.894
Female (vs male) 1.10 (0.90 to 1.35) 0.341
Nonwhite (vs white) 1.14 (0.89 to 1.47) 0.288

Tables 2 and 3 also display the C indices, or the discriminative ability of the models to differentiate whether or not a patient was readmitted or died. The range of the C index is 0.5 to 1, where values closer to 0.5 indicate random predictions and values closer to 1 indicate perfect prediction. At 30 days, the individual C indices for B‐PREPARED and CTM‐3 were only slightly better than chance (0.54 and 0.52, respectively) in their discriminative abilities. However, the C indices for the LACE score alone (0.68) and the multivariable model (0.69) including all 3 measures (ie, B‐PREPARED, CTM‐3, LACE), and clinical and demographic variables, had higher utility in discriminating patients who were readmitted/died or not. The 90‐day C indices were comparable in magnitude to those at 30 days.

DISCUSSION/CONCLUSION

In this cohort of patients hospitalized with cardiovascular disease, we compared 2 patient‐reported measures of preparedness for discharge, their association with time to death or readmission at 30 and 90 days, and their ability to discriminate patients who were or were not readmitted or died. Higher preparedness as measured by higher B‐PREPARED scores was associated with lower risk of readmission or death at 30 and 90 days after discharge in unadjusted models, and at 90 days in adjusted models. CTM‐3 was not associated with the outcome in any analyses. Lastly, neither preparedness measure was as strongly associated with readmission or death as the LACE readmission index alone.

How do our findings relate to the measurement of care transition quality? We consider 2 scenarios. First, if hospitals utilize the LACE index to predict readmission, then neither self‐reported measure of preparedness adds meaningfully to its predictive ability. However, hospital management may still find the B‐PREPARED and CTM‐3 useful as a means to direct care transition quality‐improvement efforts. These measures can show hospitals the areas in which their patients express the greatest difficulty or lack of preparedness, so that staff can attend to those needs with appropriate resources. Furthermore, a patient's perception of being prepared for discharge may differ from his or her actual preparedness. Perceived preparedness may be affected by cognitive impairment, dissatisfaction with medical care, depression, lower health‐related quality of life, and lower educational attainment, as demonstrated by Lau et al.[16] If a patient's perception of preparedness were low, it would behoove the clinician to investigate these other issues and address those that are mutable. Additionally, perceived preparedness may not correlate with the patient's understanding of his or her medical conditions, so it is imperative that clinicians provide prospective guidance about the probable postdischarge trajectory. If hospitals are not using the LACE index, then the B‐PREPARED, but not the CTM‐3, may be beneficial for predicting readmission.

How do our results fit with evidence from prior studies, and what do they mean in the context of care transitions quality? First, in the psychometric evaluation of the B‐PREPARED measure in a cohort of recently hospitalized patients, the mean score was 17.3, lower than the median of 21 in our cohort.[3] Numerous studies have utilized the CTM‐3 and the longer 15‐item CTM (CTM‐15). Though we cannot make a direct comparison, the median in our cohort (77.8) was on par with the means from other studies, which ranged from 63 to 82.[5, 17, 18, 19] Several studies, like ours, also note ceiling effects, with scores clustering at the upper end of the scale. We conjecture that our cohort's preparedness scores may be higher because our institution has made concerted efforts to improve discharge education for cardiovascular patients.

In a comparable patient population, the TRACE‐CORE (Transitions, Risks, and Actions in Coronary Events Center for Outcomes Research and Education) study followed a cohort of more than 2200 patients with ACS who were administered the CTM‐15 within 1 month of discharge.[8] In that study, the median CTM‐15 score was 66.6, lower than in our cohort. With regard to predictive ability, the investigators noted that CTM‐3 scores did not differentiate between patients who did and did not have readmissions or emergency department visits. Our results support their concern that the CTM‐15, and by extension the CTM‐3, though widely adopted as part of HCAHPS, may not have sufficient ability to discriminate differences in patient outcomes or in the quality of care transitions.

More recently, patient‐reported preparedness for discharge was assessed in a prospective cohort in Canada.[16] Lau et al. administered a single‐item measure of discharge readiness to general medicine patients at the time of discharge and found that lower readiness scores were likewise not associated with readmission or death at 30 days after adjustment for the LACE index, as in our analysis.

We must acknowledge the limitations of our findings. First, our sample of recently discharged patients with cardiovascular disease differs from the sample of community‐dwelling, underserved Americans hospitalized in the prior year that was used to reduce the CTM‐15 to 3 items.[5] This difference may explain why we did not find the CTM‐3 to be associated with readmission in our sample. Second, our analyses did not include extensive adjustment for patient‐related factors. Rather, our intention was to see how well the preparedness measures performed independently and to compare their abilities to predict readmission, which is particularly relevant for clinicians who may not have every possible covariate available when estimating readmission risk. Finally, because we limited the analyses to the patients who completed the B‐PREPARED and CTM‐3 measures (88% completion rate), we may lack data on: (1) very ill patients, who were at higher risk of readmission, were the least prepared, and were unable to answer the postdischarge phone call; and (2) highly functional patients, who were at lower risk of readmission and too busy to answer the postdischarge phone call. This may have truncated the extremes of our sample.

Importantly, our study has several strengths. We report on the largest sample to date with results for both B‐PREPARED and CTM‐3. Moreover, we examined how these measures compared with a widely used readmission prediction tool, the LACE index. We achieved very high completion rates for the postdischarge phone call in the week following discharge. Furthermore, readmissions were ascertained thoroughly through patient report, electronic medical record documentation, and collection of outside medical records.

Further research is needed to elucidate: (1) the ideal timing for administering patient‐reported measures of preparedness (before or after discharge), and (2) the challenges of implementing these measures in healthcare systems. Remaining research questions center on the tradeoffs and barriers of implementing a longer measure such as the 11‐item B‐PREPARED compared with a shorter measure such as the CTM‐3. We do not know whether the length of longer measures precludes their use by busy clinicians, though they provide more specific information about what patients feel they need at hospital discharge. Additionally, studies need to demonstrate whether preparedness is mutable and whether these measures respond to interventions designed to improve the hospital discharge process.

In our sample of recently hospitalized cardiovascular patients, there was a statistically significant association between patient‐reported preparedness for discharge, as measured by B‐PREPARED, and readmission or death at 30 and 90 days, but the magnitude of the association was small. Furthermore, another patient‐reported preparedness measure, the CTM‐3, was not associated with readmission or death at either 30 or 90 days. Lastly, neither measure discriminated well between patients who were and were not readmitted, and neither added meaningfully to the LACE index for predicting 30‐ or 90‐day readmission.

Disclosures

This study was supported by grant R01 HL109388 from the National Heart, Lung, and Blood Institute (Dr. Kripalani) and in part by grant UL1 RR024975‐01 from the National Center for Research Resources, and grant 2 UL1 TR000445‐06 from the National Center for Advancing Translational Sciences. Dr. Kripalani is a consultant to SAI Interactive, holds equity in Bioscape Digital, and is a consultant to and holds equity in PictureRx, LLC. Dr. Bell is supported by the National Institutes of Health (K23AG048347) and by the Eisenstein Women's Heart Fund. Dr. Vasilevskis is supported by the National Institutes of Health (K23AG040157) and the Geriatric Research, Education and Clinical Center. Dr. Mixon is a Veterans Affairs Health Services Research and Development Service Career Development awardee (12‐168) at the Nashville Department of Veterans Affairs. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funding agencies were not involved in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript. All authors had full access to all study data and had a significant role in writing the manuscript. The contents do not represent the views of the US Department of Veterans Affairs or the United States government.

References
  1. Centers for Medicare 9(9):598-603.
  2. Graumlich JF, Novotny NL, Aldag JC. Brief scale measuring patient preparedness for hospital discharge to home: psychometric properties. J Hosp Med. 2008;3(6):446-454.
  3. Coleman EA, Mahoney E, Parry C. Assessing the quality of preparation for posthospital care from the patient's perspective: the care transitions measure. Med Care. 2005;43(3):246-255.
  4. Parry C, Mahoney E, Chalmers SA, Coleman EA. Assessing the quality of transitional care: further applications of the care transitions measure. Med Care. 2008;46(3):317-322.
  5. Coleman EA, Parry C, Chalmers SA, Chugh A, Mahoney E. The central role of performance measurement in improving the quality of transitional care. Home Health Care Serv Q. 2007;26(4):93-104.
  6. Centers for Medicare 3:e001053.
  7. Kansagara D, Englander H, Salanitro AH, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688-1698.
  8. van Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551-557.
  9. Wang H, Robinson RD, Johnson C, et al. Using the LACE index to predict hospital readmissions in congestive heart failure patients. BMC Cardiovasc Disord. 2014;14:97.
  10. Spiva L, Hand M, VanBrackle L, McVay F. Validation of a predictive model to identify patients at high risk for hospital readmission. J Healthc Qual. 2016;38(1):34-41.
  11. Meyers AG, Salanitro A, Wallston KA, et al. Determinants of health after hospital discharge: rationale and design of the Vanderbilt Inpatient Cohort Study (VICS). BMC Health Serv Res. 2014;14:10.
  12. Coleman EA. CTM frequently asked questions. Available at: http://caretransitions.org/tools-and-resources/. Accessed January 22, 2016.
  13. Coleman EA. Instructions for scoring the CTM‐3. Available at: http://caretransitions.org/tools-and-resources/. Accessed January 22, 2016.
  14. Lau D, Padwal RS, Majumdar SR, et al. Patient‐reported discharge readiness and 30‐day risk of readmission or death: a prospective cohort study. Am J Med. 2016;129:89-95.
  15. Parrish MM, O'Malley K, Adams RI, Adams SR, Coleman EA. Implementation of the Care Transitions Intervention: sustainability and lessons learned. Prof Case Manag. 2009;14(6):282-293.
  16. Englander H, Michaels L, Chan B, Kansagara D. The care transitions innovation (C‐TraIn) for socioeconomically disadvantaged adults: results of a cluster randomized controlled trial. J Gen Intern Med. 2014;29(11):1460-1467.
  17. Record JD, Niranjan‐Azadi A, Christmas C, et al. Telephone calls to patients after discharge from the hospital: an important part of transitions of care. Med Educ Online. 2015;29(20):26701.

In recent years, US hospitals have focused on decreasing readmission rates, incented by reimbursement penalties to hospitals having excessive readmissions.[1] Gaps in the quality of care provided during transitions likely contribute to preventable readmissions.[2] One compelling quality assessment in this setting is measuring patients' discharge preparedness, using key dimensions such as understanding their instructions for medication use and follow‐up. Patient‐reported preparedness for discharge may also be useful to identify risk of readmission.

Several patient‐reported measures of preparedness for discharge exist, and herein we describe 2 measures of interest. First, the Brief‐PREPARED (B‐PREPARED) measure was derived from the longer PREPARED instrument (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services), which reflects the patient's perceived needs at discharge. In previous research, the B‐PREPARED measure predicted emergency department (ED) visits for patients who had been recently hospitalized and had a high risk for readmission.[3] Second, the Care Transitions Measure‐3 (CTM‐3) was developed by Coleman et al. as a patient‐reported measure to discriminate between patients who were more likely to have an ED visit or readmission from those who did not. CTM‐3 has also been used to evaluate hospitals' level of care coordination and for public reporting purposes.[4, 5, 6] It has been endorsed by the National Quality Forum and incorporated into the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey provided to samples of recently hospitalized US patients.[7] However, recent evidence from an inpatient cohort of cardiovascular patients suggests the CTM‐3 overinflates care transition scores compared to the longer 15‐item CTM. In that cohort, the CTM‐3 could not differentiate between patients who did or did not have repeat ED visits or readmission.[8] Thus far, the B‐PREPARED and CTM‐3 measures have not been compared to one another directly.

In addition to the development of patient‐reported measures, hospitals increasingly employ administrative algorithms to predict likelihood of readmission.[9] A commonly used measure is the LACE index (Length of stay, Acuity, Comorbidity, and Emergency department use).[10] The LACE index predicted readmission and death within 30 days of discharge in a large cohort in Canada. In 2 retrospective studies of recently hospitalized patients in the United States, the LACE index's ability to discriminate between patients readmitted or not ranged from slightly better than chance to moderate (C statistic 0.56‐0.77).[11, 12]

It is unknown whether adding patient‐reported preparedness measures to commonly used readmission prediction scores increases the ability to predict readmission risk. We sought to determine whether the B‐PREPARED and CTM‐3 measures were predictive of readmission or death, as compared to the LACE index, in a large cohort of cardiovascular patients. In addition, we sought to determine the additional predictive and discriminative ability gained from administering the B‐PREPARED and CTM‐3 measures, while adjusting for the LACE index and other clinical factors. We hypothesized that: (1) higher preparedness scores on both measures would predict lower risk of readmission or death in a cohort of patients hospitalized with cardiac diagnoses; and (2) because it provides more specific and actionable information, the B‐PREPARED would discriminate readmission more accurately than CTM‐3, after controlling for clinical factors.

METHODS

Study Setting and Design

The Vanderbilt Inpatient Cohort Study (VICS) is a prospective study of patients admitted with cardiovascular disease to Vanderbilt University Hospital. The purpose of VICS is to investigate the impact of patient and social factors on postdischarge health outcomes such as quality of life, unplanned hospital utilization, and mortality. The rationale and design of VICS are detailed elsewhere.[13] Briefly, participants completed a baseline interview while hospitalized, and follow‐up phone calls were conducted within 2 to 9 days and at approximately 30 and 90 days postdischarge. During the first follow‐up call conducted by research assistants, we collected preparedness for discharge data utilizing the 2 measures described below. After the 90‐day phone call, we collected healthcare utilization since the index admission. The study was approved by the Vanderbilt University Institutional Review Board.

Patients

Eligibility screening shortly after admission identified patients with acute decompensated heart failure (ADHF) and/or an intermediate or high likelihood of acute coronary syndrome (ACS) per a physician's review of the clinical record. Exclusion criteria included: age <18 years, non‐English speaker, unstable psychiatric illness, delirium, low likelihood of follow‐up (eg, no reliable telephone number), on hospice, or otherwise too ill to complete an interview. To be included in these analyses, patients must have completed the preparedness for discharge measurements during the first follow‐up call. Patients who died before discharge or before completing the follow‐up call were excluded.

Preparedness for Discharge Measures (Patient‐Reported Data)

Preparedness for discharge was assessed using the 11‐item B‐PREPARED and the 3‐item CTM‐3.

The B‐PREPARED measures how prepared patients felt leaving the hospital with regard to: self‐care information for medications and activity, equipment/community services needed, and confidence in managing one's health after hospitalization. The B‐PREPARED measure has good internal consistency reliability (Cronbach's = 0.76) and has been validated in patients of varying age within a week of discharge. Preparedness is the sum of responses to all 11 questions, with a range of 0 to 22. Higher scores reflect increased preparedness for discharge.[3]

The CTM‐3 asks patients to rate how well their preferences were considered regarding transitional needs, as well as their understanding of postdischarge self‐management and the purpose of their medications, each on a 4‐point response scale (strongly disagree to strongly agree). The sum of the 3 responses quantifies the patient's perception of the quality of the care transition at discharge (Cronbach's = 0.86,[14] 0.92 in a cohort similar to ours[8]). Scores range from 3 to 12, with higher score indicating more preparedness. Then, the sum is transformed to a 0 to 100 scale.[15]

Clinical Readmission Risk Measures (Medical Record Data)

The LACE index, published by Van Walraven et al.,[10] takes into account 4 categories of clinical data: length of hospital stay, acuity of event, comorbidities, and ED visits in the prior 6 months. More specifically, a diagnostic code‐based, modified version of the Charlson Comorbidity Index was used to calculate the comorbidity score. These clinical criteria were obtained from an administrative database and weighted according to the methods used by Van Walraven et al. An overall score was calculated on a scale of 0 to 19, with higher scores indicating higher risk of readmission or death within 30 days.

From medical records, we also collected patients' demographic data including age, race, and gender, and diagnosis of ACS, ADHF, or both at hospital admission.

Outcome Measures

Healthcare utilization data were obtained from the index hospital as well as outside facilities. The electronic medical records from Vanderbilt University Hospital provided information about healthcare utilization at Vanderbilt 90 days after initial discharge. We also used Vanderbilt records to see if patients were transferred to Vanderbilt from other hospitals or if patients visited other hospitals before or after enrollment. We supplemented this with patient self‐report during the follow‐up telephone calls (at 30 and 90 days after initial discharge) so that any additional ED and hospital visits could be captured. Mortality data were collected from medical records, Social Security data, and family reports. The main outcome was time to first unplanned hospital readmission or death within 30 and 90 days of discharge.

Analysis

To describe our sample, we summarized categorical variables with percentages and continuous variables with percentiles. To test for evidence of unadjusted covariate‐outcome relationships, we used Pearson 2 and Wilcoxon rank sum tests for categorical and continuous covariates, respectively.

For the primary analyses we used Cox proportional hazard models to examine the independent associations between the prespecified predictors for patient‐reported preparedness and time to first unplanned readmission or death within 30 and 90 days of discharge. For each outcome (30‐ and 90‐day readmission or death), we fit marginal models separately for each of the B‐PREPARED, CTM‐3, and LACE scores. We then fit multivariable models that used both preparedness measures as well as age, gender, race, and diagnosis (ADHF and/or ACS), variables available to clinicians when patients are admitted. When fitting the multivariable models, we did not find strong evidence of nonlinear effects; therefore, only linear effects are reported. To facilitate comparison of effects, we scaled continuous variables by their interquartile range (IQR). The associated, exponentiated regression parameter estimates may therefore be interpreted as hazard ratios for readmission or death per IQR change in each predictor. In addition to parameter estimation, we computed the C index to evaluate capacity for the model to discriminate those who were and were not readmitted or died. All analyses were conducted in R version 3.1.2 (R Foundation for Statistical Computing, Vienna, Austria).

RESULTS

From the cohort of 1239 patients (Figure 1), 64%, 28%, and 7% of patients were hospitalized with ACS, ADHF, or both, respectively (Table 1). Nearly 45% of patients were female, 83% were white, and the median age was 61 years (IQR 5269). The median length of stay was 3 days (IQR 25). The median preparedness scores were high for both B‐PREPARED (21, IQR 1822) and CTM‐3 (77.8, IQR 66.7100). A total of 211 (17%) and 380 (31%) were readmitted or died within 30 and 90 days, respectively. The completion rate for the postdischarge phone calls was 88%.

Patient Characteristics
Death or Readmission Within 30 Days Death or Readmission Within 90 Days
Not Readmitted, N = 1028 Death/Readmitted, N = 211 P Value Not Readmitted, N = 859 Death/Readmitted, N = 380 P Value
  • NOTE: Continuous variables: summarize with the 5th:25th:50th:75th:95th. Categorical variables: summarize with the percentage and (N). Abbreviations: ACS, acute coronary syndromes; ADHF, acute decompensated heart failure; B‐PREPARED, Brief PREPARED (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services) CTM‐3, Care Transitions Measure‐3; LACE, Length of hospital stay, Acuity of event, Comorbidities, and ED visits in the prior 6 months; LOS, length of stay. *Pearson test. Wilcoxon test.

Gender, male 55.8% (574) 53.1% (112) 0.463* 56.3% (484) 53.2% (202) 0.298*
Female 44.2% (454) 46.9% (99) 43.7% (375) 46.8% (178)
Race, white 83.9% (860) 80.6% (170) 0.237* 86.0% (737) 77.3% (293) <0.001*
Race, nonwhite 16.1% (165) 19.4% (41) 14.0% (120) 22.7% (86)
Diagnosis ACS 68.0% (699) 46.4% (98) <0.001* 72.9% (626) 45.0% (171) <0.001*
ADHF 24.8% (255) 46.0% (97) 20.3% (174) 46.8% (178)
Both 7.2% (74) 7.6% (16) 6.9% (59) 8.2% (31)
Age 39.4:52:61:68:80 37.5:53.5:62:70:82 0.301 40:52:61:68:80 38:52:61 :70:82 0.651
LOS 1:2:3:5:10 1:3: 4:7.5:17 <0.001 1:2:3:5:9 1:3:4:7:15 <0.001
CTM‐3 55.6:66.7: 77.8:100:100 55.6:66.7:77.8:100 :100 0.305 55.6:66.7:88.9:100:100 55.6:66.7:77.8:100 :100 0.080
B‐PREPARED 12:18:21:22.:22 10:17:20:22:22 0.066 12:18:21:22:22 10:17:20 :22:22 0.030
LACE 1:4: 7:10 :14 3.5:7:10:13:17 <0.001 1:4:6: 9:14 3:7:10:13:16 <0.001
Figure 1
Study flow diagram. Abbreviations: ACS, acute coronary syndrome; ADHF, acute decompensated heart failure; VICS, Vanderbilt Inpatient Cohort Study.

B‐PREPARED and CTM‐3 were moderately correlated with one another (Spearman's = 0.40, P < 0.001). In bivariate analyses (Table 1), the association between B‐PREPARED and readmission or death was significant at 90 days (P = 0.030) but not 30 days. The CTM‐3 showed no significant association with readmission or death at either time point. The LACE score was significantly associated with rates of readmission at 30 and 90 days (P < 0.001).

Outcomes Within 30 Days of Discharge

When examining readmission or death within 30 days of discharge, simple unadjusted models 2 and 3 showed that the B‐PREPARED and LACE scores, respectively, were each significantly associated with time to first readmission or death (Table 2). Specifically, a 4‐point increase in the B‐PREPARED score was associated with a 16% decrease in the hazard of readmission or death (hazard ratio [HR] = 0.84, 95% confidence interval [CI]: 0.72 to 0.97). A 5‐point increase in the LACE score was associated with a 100% increase in the hazard of readmission or death (HR = 2.00, 95% CI: 1.72 to 2.32). In the multivariable model with both preparedness scores and diagnosis (model 4), the B‐PREPARED score (HR = 0.82, 95% CI: 0.70 to 0.97) was significantly associated with time to first readmission or death. In the full 30‐day model including B‐PREPARED, CTM‐3, LACE, age, gender, race, and diagnosis (model 5), only the LACE score (HR = 1.83, 95% CI: 1.54 to 2.18) was independently associated with time to readmission or death. Finally, the CTM‐3 did not predict 30‐day readmission or death in any of the models tested.

Cox Models: Time to Death or Readmission Within 30 Days of Index Hospitalization
Models HR (95% CI)* P Value C Index
  • NOTE: Abbreviations: ADHF, acute decompensated heart failure; B‐PREPARED, Brief PREPARED (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services); CI, confidence interval; CTM‐3, Care Transitions Measure‐3; HR, hazard ratio; LACE, Length of hospital stay, Acuity of event, Comorbidities, and Emergency department visits in the prior 6 months.

1. CTM (per 10‐point change) 0.95 (0.88 to 1.03) 0.257 0.523
2. B‐PREPARED (per 4‐point change) 0.84 (0.72 to 0.97) 0.017 0.537
3. LACE (per 5‐point change) 2.00 (1.72 to 2.32) <0.001 0.679
4. CTM (per 10‐point change) 1.00 (0.92 to 1.10) 0.935 0.620
B‐PREPARED (per 4‐point change) 0.82 (0.70 to 0.97) 0.019
ADHF only (vs ACS only) 2.46 (1.86 to 3.26) <0.001
ADHF and ACS (vs ACS only) 1.42 (0.84 to 2.42) 0.191
5. CTM (per 10‐point change) 1.02 (0.93 to 1.11) 0.722 0.692
B‐PREPARED (per 4 point change) 0.87 (0.74 to 1.03) 0.106
LACE (per 5‐point change) 1.83 (1.54 to 2.18) <0.001
ADHF only (vs ACS only) 1.51 (1.10 to 2.08) 0.010
ADHF and ACS (vs ACS only) 0.90 (0.52 to 1.55) 0.690
Age (per 10‐year change) 1.02 (0.92 to 1.14) 0.669
Female (vs male) 1.11 (0.85 to 1.46) 0.438
Nonwhite (vs white) 0.92 (0.64 to 1.30) 0.624

Outcomes Within 90 Days of Discharge

At 90 days after discharge, again the separate unadjusted models 2 and 3 demonstrated that the B‐PREPARED and LACE scores, respectively, were each significantly associated with time to first readmission or death, whereas the CTM‐3 model only showed marginal significance (Table 3). In the multivariable model with both preparedness scores and diagnosis (model 4), results were similar to 30 days as the B‐PREPARED score was significantly associated with time to first readmission or death. Lastly, in the full model (model 5) at 90 days, again the LACE score was significantly associated with time to first readmission or death. In addition, B‐PREPARED scores were associated with a significant decrease in risk of readmission or death (HR = 0.88, 95% CI: 0.78 to 1.00); CTM‐3 scores were not independently associated with outcomes.

Cox Models: Time to Death or Readmission Within 90 Days of Index Hospitalization
Model HR (95% CI)* P Value C Index
  • NOTE: Abbreviations: ADHF, acute decompensated heart failure; B‐PREPARED, Brief PREPARED (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services); CI, confidence interval; CTM‐3, Care Transitions Measure‐3; HR, hazard ratio; LACE, Length of hospital stay, Acuity of event, Comorbidities, and Emergency department visits in the prior 6 months.

1. CTM (per 10‐point change) 0.94 (0.89 to 1.00) 0.051 0.526
2. B‐PREPARED (per 4‐point change) 0.84 (0.75 to 0.94) 0.002 0.533
3. LACE (per 5‐point change) 2.03 (1.82 to 2.27) <0.001 0.683
4. CTM (per 10‐point change) 0.99 (0.93 to 1.06) 0.759 0.640
B‐PREPARED (per 4‐point change) 0.83 (0.74 to 0.94) 0.003
ADHF only (vs ACS only) 2.88 (2.33 to 3.56) <0.001
ADHF and ACS (vs ACS only) 1.62 (1.11 to 2.38) 0.013
5. CTM (per 10‐point change) 1.00 (0.94 to 1.07) 0.932 0.698
B‐PREPARED (per 4‐point change) 0.88 (0.78 to 1.00) 0.043
LACE (per 5‐point change) 1.76 (1.55 to 2.00) <0.001
ADHF only (vs ACS only) 1.76 (1.39 to 2.24) <0.001
ADHF and ACS (vs ACS only) 1.00 (0.67 to 1.50) 0.980
Age (per 10‐year change) 1.00 (0.93 to 1.09) 0.894
Female (vs male) 1.10 (0.90 to 1.35) 0.341
Nonwhite (vs white) 1.14 (0.89 to 1.47) 0.288

Tables 2 and 3 also display the C indices, or the discriminative ability of the models to differentiate whether or not a patient was readmitted or died. The range of the C index is 0.5 to 1, where values closer to 0.5 indicate random predictions and values closer to 1 indicate perfect prediction. At 30 days, the individual C indices for B‐PREPARED and CTM‐3 were only slightly better than chance (0.54 and 0.52, respectively) in their discriminative abilities. However, the C indices for the LACE score alone (0.68) and the multivariable model (0.69) including all 3 measures (ie, B‐PREPARED, CTM‐3, LACE), and clinical and demographic variables, had higher utility in discriminating patients who were readmitted/died or not. The 90‐day C indices were comparable in magnitude to those at 30 days.

DISCUSSION/CONCLUSION

In this cohort of patients hospitalized with cardiovascular disease, we compared 2 patient‐reported measures of preparedness for discharge, their association with time to death or readmission at 30 and 90 days, and their ability to discriminate patients who were or were not readmitted or died. Higher preparedness as measured by higher B‐PREPARED scores was associated with lower risk of readmission or death at 30 and 90 days after discharge in unadjusted models, and at 90 days in adjusted models. CTM‐3 was not associated with the outcome in any analyses. Lastly, the individual preparedness measures were not as strongly associated with readmission or death compared to the LACE readmission index alone.

How do our findings relate to the measurement of care transition quality? We consider 2 scenarios. First, if hospitals utilize the LACE index to predict readmission, then neither self‐reported measure of preparedness adds meaningfully to its predictive ability. However, hospital management may still find the B‐PREPARED and CTM‐3 useful as a means to direct care transition quality‐improvement efforts. These measures can instruct hospitals as to what areas their patients express the greatest difficulty or lack of preparedness and closely attend to patient needs with appropriate resources. Furthermore, the patient's perception of being prepared for discharge may be different than their actual preparedness. Their perceived preparedness may be affected by cognitive impairment, dissatisfaction with medical care, depression, lower health‐related quality of life, and lower educational attainment as demonstrated by Lau et al.[16] If a patient's perception of preparedness were low, it would behoove the clinician to investigate these other issues and address those that are mutable. Additionally, perceived preparedness may not correlate with the patient's understanding of their medical conditions, so it is imperative that clinicians provide prospective guidance about their probable postdischarge trajectory. If hospitals are not utilizing the LACE index, then perhaps using the B‐PREPARED, but not the CTM‐3, may be beneficial for predicting readmission.

How do our results fit with evidence from prior studies, and what do they mean in the context of care transitions quality? First, in the psychometric evaluation of the B‐PREPARED measure in a cohort of recently hospitalized patients, the mean score was 17.3, lower than the median of 21 in our cohort.[3] Numerous studies have utilized the CTM‐3 and the longer‐version CTM‐15. Though we cannot make a direct comparison, the median in our cohort (77.8) was on par with the means from other studies, which ranged from 63 to 82.[5, 17, 18, 19] Several studies also note ceiling effects with clusters of scores at the upper end of the scale, as did we. We conjecture that our cohort's preparedness scores may be higher because our institution has made concerted efforts to improve the discharge education for cardiovascular patients.

In a comparable patient population, the TRACE‐CORE (Transitions, Risks, and Actions in Coronary Events Center for Outcomes Research and Education) study is a cohort of more than 2200 patients with ACS who were administered the CTM‐15 within 1 month of discharge.[8] In that study, the median CTM‐15 score was 66.6, which is lower than our cohort. With regard to the predictive ability of the CTM‐3, they note that CTM‐3 scores did not differentiate between patients who were or were not readmitted or had emergency department visits. Our results support their concern that the CTM‐15 and by extension the CTM‐3, though adopted widely as part of HCAHPS, may not have sufficient ability to discriminate differences in patient outcomes or the quality of care transitions.

More recently, patient‐reported preparedness for discharge was assessed in a prospective cohort in Canada.[16] Lau et al. administered a single‐item measure of readiness at the time of discharge to general medicine patients, and found that lower readiness scores were also not associated with readmission or death at 30 days, when adjusted for the LACE index as we did.

We must acknowledge the limitations of our findings. First, our sample of recently discharged patients with cardiovascular disease is different than the community‐dwelling, underserved Americans hospitalized in the prior year, which served as the sample for reducing the CTM‐15 to 3 items.[5] This fact may explain why we did not find the CTM‐3 to be associated with readmission in our sample. Second, our analyses did not include extensive adjustment for patient‐related factors. Rather, our intention was to see how well the preparedness measures performed independently and compare their abilities to predict readmission, which is particularly relevant for clinicians who may not have all possible covariates in predicting readmission. Finally, because we limited the analyses to the patients who completed the B‐PREPARED and CTM‐3 measures (88% completion rate), we may not have data for: (1) very ill patients, who had a higher risk of readmission and least prepared, and were not able to answer the postdischarge phone call; and (2) very functional patients, who had a lower risk of readmission and were too busy to answer the postdischarge phone call. This may have limited the extremes in the spectrum of our sample.

Importantly, our study has several strengths. We report on the largest sample to date with results of both B‐PREPARED and CTM‐3. Moreover, we examined how these measures compared to a widely used readmission prediction tool, the LACE index. We had very high postdischarge phone call completion rates in the week following discharge. Furthermore, we had thorough assessment of readmission data through patient report, electronic medical record documentation, and collection of outside medical records.

Further research is needed to elucidate: (1) the ideal administration time of the patient‐reported measures of preparedness (before or after discharge), and (2) the challenges to the implementation of measures in healthcare systems. Remaining research questions center on the tradeoffs and barriers to implementing a longer measure like the 11‐item B‐PREPARED compared to a shorter measure like the CTM‐3. We do not know whether longer measures preclude their use by busy clinicians, though it provides more specific information about what patients feel they need at hospital discharge. Additionally, studies need to demonstrate the mutability of preparedness and the response of measures to interventions designed to improve the hospital discharge process.

In our sample of recently hospitalized cardiovascular patients, there was a statistically significant association between patient‐reported preparedness for discharged, as measured by B‐PREPARED, and readmissions/death at 30 and 90 days, but the magnitude of the association was very small. Furthermore, another patient‐reported preparedness measure, CTM‐3, was not associated with readmissions or death at either 30 or 90 days. Lastly, neither measure discriminated well between patients who were readmitted or not, and neither measure added meaningfully to the LACE index in terms of predicting 30‐ or 90‐day readmissions.

Disclosures

This study was supported by grant R01 HL109388 from the National Heart, Lung, and Blood Institute (Dr. Kripalani) and in part by grant UL1 RR024975‐01 from the National Center for Research Resources, and grant 2 UL1 TR000445‐06 from the National Center for Advancing Translational Sciences. Dr. Kripalani is a consultant to SAI Interactive and holds equity in Bioscape Digital, and is a consultant to and holds equity in PictureRx, LLC. Dr. Bell is supported by the National Institutes of Health (K23AG048347) and by the Eisenstein Women's Heart Fund. Dr. Vasilevskis is supported by the National Institutes of Health (K23AG040157) and the Geriatric Research, Education and Clinical Center. Dr. Mixon is a Veterans Affairs Health Services Research and Development Service Career Development awardee (12‐168) at the Nashville Department of Veterans Affairs. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funding agency was not involved in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. All authors had full access to all study data and had a significant role in writing the manuscript. The contents do not represent the views of the US Department of Veterans Affairs or the United States government. Dr. Kripalani is a consultant to and holds equity in PictureRx, LLC.

In recent years, US hospitals have focused on decreasing readmission rates, incented by reimbursement penalties to hospitals having excessive readmissions.[1] Gaps in the quality of care provided during transitions likely contribute to preventable readmissions.[2] One compelling quality assessment in this setting is measuring patients' discharge preparedness, using key dimensions such as understanding their instructions for medication use and follow‐up. Patient‐reported preparedness for discharge may also be useful to identify risk of readmission.

Several patient‐reported measures of preparedness for discharge exist, and herein we describe 2 measures of interest. First, the Brief‐PREPARED (B‐PREPARED) measure was derived from the longer PREPARED instrument (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services), which reflects the patient's perceived needs at discharge. In previous research, the B‐PREPARED measure predicted emergency department (ED) visits for patients who had been recently hospitalized and had a high risk for readmission.[3] Second, the Care Transitions Measure‐3 (CTM‐3) was developed by Coleman et al. as a patient‐reported measure to discriminate between patients who were more likely to have an ED visit or readmission from those who did not. CTM‐3 has also been used to evaluate hospitals' level of care coordination and for public reporting purposes.[4, 5, 6] It has been endorsed by the National Quality Forum and incorporated into the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey provided to samples of recently hospitalized US patients.[7] However, recent evidence from an inpatient cohort of cardiovascular patients suggests the CTM‐3 overinflates care transition scores compared to the longer 15‐item CTM. In that cohort, the CTM‐3 could not differentiate between patients who did or did not have repeat ED visits or readmission.[8] Thus far, the B‐PREPARED and CTM‐3 measures have not been compared to one another directly.

In addition to the development of patient‐reported measures, hospitals increasingly employ administrative algorithms to predict likelihood of readmission.[9] A commonly used measure is the LACE index (Length of stay, Acuity, Comorbidity, and Emergency department use).[10] The LACE index predicted readmission and death within 30 days of discharge in a large cohort in Canada. In 2 retrospective studies of recently hospitalized patients in the United States, the LACE index's ability to discriminate between patients readmitted or not ranged from slightly better than chance to moderate (C statistic 0.56‐0.77).[11, 12]

It is unknown whether adding patient‐reported preparedness measures to commonly used readmission prediction scores increases the ability to predict readmission risk. We sought to determine whether the B‐PREPARED and CTM‐3 measures were predictive of readmission or death, as compared to the LACE index, in a large cohort of cardiovascular patients. In addition, we sought to determine the additional predictive and discriminative ability gained from administering the B‐PREPARED and CTM‐3 measures, while adjusting for the LACE index and other clinical factors. We hypothesized that: (1) higher preparedness scores on both measures would predict lower risk of readmission or death in a cohort of patients hospitalized with cardiac diagnoses; and (2) because it provides more specific and actionable information, the B‐PREPARED would discriminate readmission more accurately than CTM‐3, after controlling for clinical factors.

METHODS

Study Setting and Design

The Vanderbilt Inpatient Cohort Study (VICS) is a prospective study of patients admitted with cardiovascular disease to Vanderbilt University Hospital. The purpose of VICS is to investigate the impact of patient and social factors on postdischarge health outcomes such as quality of life, unplanned hospital utilization, and mortality. The rationale and design of VICS are detailed elsewhere.[13] Briefly, participants completed a baseline interview while hospitalized, and follow‐up phone calls were conducted within 2 to 9 days and at approximately 30 and 90 days postdischarge. During the first follow‐up call conducted by research assistants, we collected preparedness for discharge data utilizing the 2 measures described below. After the 90‐day phone call, we collected healthcare utilization since the index admission. The study was approved by the Vanderbilt University Institutional Review Board.

Patients

Eligibility screening shortly after admission identified patients with acute decompensated heart failure (ADHF) and/or an intermediate or high likelihood of acute coronary syndrome (ACS) per a physician's review of the clinical record. Exclusion criteria included: age <18 years, non‐English speaker, unstable psychiatric illness, delirium, low likelihood of follow‐up (eg, no reliable telephone number), on hospice, or otherwise too ill to complete an interview. To be included in these analyses, patients must have completed the preparedness for discharge measurements during the first follow‐up call. Patients who died before discharge or before completing the follow‐up call were excluded.

Preparedness for Discharge Measures (Patient‐Reported Data)

Preparedness for discharge was assessed using the 11‐item B‐PREPARED and the 3‐item CTM‐3.

The B‐PREPARED measures how prepared patients felt leaving the hospital with regard to: self‐care information for medications and activity, equipment/community services needed, and confidence in managing one's health after hospitalization. The B‐PREPARED measure has good internal consistency reliability (Cronbach's = 0.76) and has been validated in patients of varying age within a week of discharge. Preparedness is the sum of responses to all 11 questions, with a range of 0 to 22. Higher scores reflect increased preparedness for discharge.[3]

The CTM‐3 asks patients to rate how well their preferences were considered regarding transitional needs, as well as their understanding of postdischarge self‐management and the purpose of their medications, each on a 4‐point response scale (strongly disagree to strongly agree). The sum of the 3 responses quantifies the patient's perception of the quality of the care transition at discharge (Cronbach's = 0.86,[14] 0.92 in a cohort similar to ours[8]). Scores range from 3 to 12, with higher score indicating more preparedness. Then, the sum is transformed to a 0 to 100 scale.[15]

Clinical Readmission Risk Measures (Medical Record Data)

The LACE index, published by Van Walraven et al.,[10] takes into account 4 categories of clinical data: length of hospital stay, acuity of event, comorbidities, and ED visits in the prior 6 months. More specifically, a diagnostic code‐based, modified version of the Charlson Comorbidity Index was used to calculate the comorbidity score. These clinical criteria were obtained from an administrative database and weighted according to the methods used by Van Walraven et al. An overall score was calculated on a scale of 0 to 19, with higher scores indicating higher risk of readmission or death within 30 days.

From medical records, we also collected patients' demographic data including age, race, and gender, and diagnosis of ACS, ADHF, or both at hospital admission.

Outcome Measures

Healthcare utilization data were obtained from the index hospital as well as outside facilities. The electronic medical records from Vanderbilt University Hospital provided information about healthcare utilization at Vanderbilt 90 days after initial discharge. We also used Vanderbilt records to see if patients were transferred to Vanderbilt from other hospitals or if patients visited other hospitals before or after enrollment. We supplemented this with patient self‐report during the follow‐up telephone calls (at 30 and 90 days after initial discharge) so that any additional ED and hospital visits could be captured. Mortality data were collected from medical records, Social Security data, and family reports. The main outcome was time to first unplanned hospital readmission or death within 30 and 90 days of discharge.

Analysis

To describe our sample, we summarized categorical variables with percentages and continuous variables with percentiles. To test for evidence of unadjusted covariate‐outcome relationships, we used Pearson 2 and Wilcoxon rank sum tests for categorical and continuous covariates, respectively.

For the primary analyses we used Cox proportional hazard models to examine the independent associations between the prespecified predictors for patient‐reported preparedness and time to first unplanned readmission or death within 30 and 90 days of discharge. For each outcome (30‐ and 90‐day readmission or death), we fit marginal models separately for each of the B‐PREPARED, CTM‐3, and LACE scores. We then fit multivariable models that used both preparedness measures as well as age, gender, race, and diagnosis (ADHF and/or ACS), variables available to clinicians when patients are admitted. When fitting the multivariable models, we did not find strong evidence of nonlinear effects; therefore, only linear effects are reported. To facilitate comparison of effects, we scaled continuous variables by their interquartile range (IQR). The associated, exponentiated regression parameter estimates may therefore be interpreted as hazard ratios for readmission or death per IQR change in each predictor. In addition to parameter estimation, we computed the C index to evaluate capacity for the model to discriminate those who were and were not readmitted or died. All analyses were conducted in R version 3.1.2 (R Foundation for Statistical Computing, Vienna, Austria).

RESULTS

From the cohort of 1239 patients (Figure 1), 64%, 28%, and 7% of patients were hospitalized with ACS, ADHF, or both, respectively (Table 1). Nearly 45% of patients were female, 83% were white, and the median age was 61 years (IQR 5269). The median length of stay was 3 days (IQR 25). The median preparedness scores were high for both B‐PREPARED (21, IQR 1822) and CTM‐3 (77.8, IQR 66.7100). A total of 211 (17%) and 380 (31%) were readmitted or died within 30 and 90 days, respectively. The completion rate for the postdischarge phone calls was 88%.

Patient Characteristics
Death or Readmission Within 30 Days Death or Readmission Within 90 Days
Not Readmitted, N = 1028 Death/Readmitted, N = 211 P Value Not Readmitted, N = 859 Death/Readmitted, N = 380 P Value
  • NOTE: Continuous variables: summarize with the 5th:25th:50th:75th:95th. Categorical variables: summarize with the percentage and (N). Abbreviations: ACS, acute coronary syndromes; ADHF, acute decompensated heart failure; B‐PREPARED, Brief PREPARED (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services) CTM‐3, Care Transitions Measure‐3; LACE, Length of hospital stay, Acuity of event, Comorbidities, and ED visits in the prior 6 months; LOS, length of stay. *Pearson test. Wilcoxon test.

Gender, male 55.8% (574) 53.1% (112) 0.463* 56.3% (484) 53.2% (202) 0.298*
Female 44.2% (454) 46.9% (99) 43.7% (375) 46.8% (178)
Race, white 83.9% (860) 80.6% (170) 0.237* 86.0% (737) 77.3% (293) <0.001*
Race, nonwhite 16.1% (165) 19.4% (41) 14.0% (120) 22.7% (86)
Diagnosis ACS 68.0% (699) 46.4% (98) <0.001* 72.9% (626) 45.0% (171) <0.001*
ADHF 24.8% (255) 46.0% (97) 20.3% (174) 46.8% (178)
Both 7.2% (74) 7.6% (16) 6.9% (59) 8.2% (31)
Age 39.4:52:61:68:80 37.5:53.5:62:70:82 0.301 40:52:61:68:80 38:52:61 :70:82 0.651
LOS 1:2:3:5:10 1:3: 4:7.5:17 <0.001 1:2:3:5:9 1:3:4:7:15 <0.001
CTM‐3 55.6:66.7: 77.8:100:100 55.6:66.7:77.8:100 :100 0.305 55.6:66.7:88.9:100:100 55.6:66.7:77.8:100 :100 0.080
B‐PREPARED 12:18:21:22.:22 10:17:20:22:22 0.066 12:18:21:22:22 10:17:20 :22:22 0.030
LACE 1:4: 7:10 :14 3.5:7:10:13:17 <0.001 1:4:6: 9:14 3:7:10:13:16 <0.001
Figure 1
Study flow diagram. Abbreviations: ACS, acute coronary syndrome; ADHF, acute decompensated heart failure; VICS, Vanderbilt Inpatient Cohort Study.

B‐PREPARED and CTM‐3 were moderately correlated with one another (Spearman's = 0.40, P < 0.001). In bivariate analyses (Table 1), the association between B‐PREPARED and readmission or death was significant at 90 days (P = 0.030) but not 30 days. The CTM‐3 showed no significant association with readmission or death at either time point. The LACE score was significantly associated with rates of readmission at 30 and 90 days (P < 0.001).

Outcomes Within 30 Days of Discharge

When examining readmission or death within 30 days of discharge, simple unadjusted models 2 and 3 showed that the B‐PREPARED and LACE scores, respectively, were each significantly associated with time to first readmission or death (Table 2). Specifically, a 4‐point increase in the B‐PREPARED score was associated with a 16% decrease in the hazard of readmission or death (hazard ratio [HR] = 0.84, 95% confidence interval [CI]: 0.72 to 0.97). A 5‐point increase in the LACE score was associated with a 100% increase in the hazard of readmission or death (HR = 2.00, 95% CI: 1.72 to 2.32). In the multivariable model with both preparedness scores and diagnosis (model 4), the B‐PREPARED score (HR = 0.82, 95% CI: 0.70 to 0.97) was significantly associated with time to first readmission or death. In the full 30‐day model including B‐PREPARED, CTM‐3, LACE, age, gender, race, and diagnosis (model 5), only the LACE score (HR = 1.83, 95% CI: 1.54 to 2.18) was independently associated with time to readmission or death. Finally, the CTM‐3 did not predict 30‐day readmission or death in any of the models tested.

Cox Models: Time to Death or Readmission Within 30 Days of Index Hospitalization
Models HR (95% CI)* P Value C Index
  • NOTE: Abbreviations: ADHF, acute decompensated heart failure; B‐PREPARED, Brief PREPARED (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services); CI, confidence interval; CTM‐3, Care Transitions Measure‐3; HR, hazard ratio; LACE, Length of hospital stay, Acuity of event, Comorbidities, and Emergency department visits in the prior 6 months.

1. CTM (per 10‐point change) 0.95 (0.88 to 1.03) 0.257 0.523
2. B‐PREPARED (per 4‐point change) 0.84 (0.72 to 0.97) 0.017 0.537
3. LACE (per 5‐point change) 2.00 (1.72 to 2.32) <0.001 0.679
4. CTM (per 10‐point change) 1.00 (0.92 to 1.10) 0.935 0.620
B‐PREPARED (per 4‐point change) 0.82 (0.70 to 0.97) 0.019
ADHF only (vs ACS only) 2.46 (1.86 to 3.26) <0.001
ADHF and ACS (vs ACS only) 1.42 (0.84 to 2.42) 0.191
5. CTM (per 10‐point change) 1.02 (0.93 to 1.11) 0.722 0.692
B‐PREPARED (per 4 point change) 0.87 (0.74 to 1.03) 0.106
LACE (per 5‐point change) 1.83 (1.54 to 2.18) <0.001
ADHF only (vs ACS only) 1.51 (1.10 to 2.08) 0.010
ADHF and ACS (vs ACS only) 0.90 (0.52 to 1.55) 0.690
Age (per 10‐year change) 1.02 (0.92 to 1.14) 0.669
Female (vs male) 1.11 (0.85 to 1.46) 0.438
Nonwhite (vs white) 0.92 (0.64 to 1.30) 0.624

Outcomes Within 90 Days of Discharge

At 90 days after discharge, again the separate unadjusted models 2 and 3 demonstrated that the B‐PREPARED and LACE scores, respectively, were each significantly associated with time to first readmission or death, whereas the CTM‐3 model only showed marginal significance (Table 3). In the multivariable model with both preparedness scores and diagnosis (model 4), results were similar to 30 days as the B‐PREPARED score was significantly associated with time to first readmission or death. Lastly, in the full model (model 5) at 90 days, again the LACE score was significantly associated with time to first readmission or death. In addition, B‐PREPARED scores were associated with a significant decrease in risk of readmission or death (HR = 0.88, 95% CI: 0.78 to 1.00); CTM‐3 scores were not independently associated with outcomes.

Cox Models: Time to Death or Readmission Within 90 Days of Index Hospitalization
Model HR (95% CI)* P Value C Index
  • NOTE: Abbreviations: ADHF, acute decompensated heart failure; B‐PREPARED, Brief PREPARED (Prescriptions, Ready to re‐enter community, Education, Placement, Assurance of safety, Realistic expectations, Empowerment, Directed to appropriate services); CI, confidence interval; CTM‐3, Care Transitions Measure‐3; HR, hazard ratio; LACE, Length of hospital stay, Acuity of event, Comorbidities, and Emergency department visits in the prior 6 months.

1. CTM (per 10‐point change) 0.94 (0.89 to 1.00) 0.051 0.526
2. B‐PREPARED (per 4‐point change) 0.84 (0.75 to 0.94) 0.002 0.533
3. LACE (per 5‐point change) 2.03 (1.82 to 2.27) <0.001 0.683
4. CTM (per 10‐point change) 0.99 (0.93 to 1.06) 0.759 0.640
B‐PREPARED (per 4‐point change) 0.83 (0.74 to 0.94) 0.003
ADHF only (vs ACS only) 2.88 (2.33 to 3.56) <0.001
ADHF and ACS (vs ACS only) 1.62 (1.11 to 2.38) 0.013
5. CTM (per 10‐point change) 1.00 (0.94 to 1.07) 0.932 0.698
B‐PREPARED (per 4‐point change) 0.88 (0.78 to 1.00) 0.043
LACE (per 5‐point change) 1.76 (1.55 to 2.00) <0.001
ADHF only (vs ACS only) 1.76 (1.39 to 2.24) <0.001
ADHF and ACS (vs ACS only) 1.00 (0.67 to 1.50) 0.980
Age (per 10‐year change) 1.00 (0.93 to 1.09) 0.894
Female (vs male) 1.10 (0.90 to 1.35) 0.341
Nonwhite (vs white) 1.14 (0.89 to 1.47) 0.288

Tables 2 and 3 also display the C indices, or the discriminative ability of the models to differentiate whether or not a patient was readmitted or died. The range of the C index is 0.5 to 1, where values closer to 0.5 indicate random predictions and values closer to 1 indicate perfect prediction. At 30 days, the individual C indices for B‐PREPARED and CTM‐3 were only slightly better than chance (0.54 and 0.52, respectively) in their discriminative abilities. However, the C indices for the LACE score alone (0.68) and the multivariable model (0.69) including all 3 measures (ie, B‐PREPARED, CTM‐3, LACE), and clinical and demographic variables, had higher utility in discriminating patients who were readmitted/died or not. The 90‐day C indices were comparable in magnitude to those at 30 days.

DISCUSSION/CONCLUSION

In this cohort of patients hospitalized with cardiovascular disease, we compared 2 patient‐reported measures of preparedness for discharge, their association with time to death or readmission at 30 and 90 days, and their ability to discriminate patients who were or were not readmitted or died. Higher preparedness as measured by higher B‐PREPARED scores was associated with lower risk of readmission or death at 30 and 90 days after discharge in unadjusted models, and at 90 days in adjusted models. CTM‐3 was not associated with the outcome in any analyses. Lastly, the individual preparedness measures were not as strongly associated with readmission or death compared to the LACE readmission index alone.

How do our findings relate to the measurement of care transition quality? We consider 2 scenarios. First, if hospitals utilize the LACE index to predict readmission, then neither self‐reported measure of preparedness adds meaningfully to its predictive ability. However, hospital management may still find the B‐PREPARED and CTM‐3 useful as a means to direct care transition quality‐improvement efforts. These measures can instruct hospitals as to what areas their patients express the greatest difficulty or lack of preparedness and closely attend to patient needs with appropriate resources. Furthermore, the patient's perception of being prepared for discharge may be different than their actual preparedness. Their perceived preparedness may be affected by cognitive impairment, dissatisfaction with medical care, depression, lower health‐related quality of life, and lower educational attainment as demonstrated by Lau et al.[16] If a patient's perception of preparedness were low, it would behoove the clinician to investigate these other issues and address those that are mutable. Additionally, perceived preparedness may not correlate with the patient's understanding of their medical conditions, so it is imperative that clinicians provide prospective guidance about their probable postdischarge trajectory. If hospitals are not utilizing the LACE index, then perhaps using the B‐PREPARED, but not the CTM‐3, may be beneficial for predicting readmission.

How do our results fit with evidence from prior studies, and what do they mean in the context of care transitions quality? First, in the psychometric evaluation of the B‐PREPARED measure in a cohort of recently hospitalized patients, the mean score was 17.3, lower than the median of 21 in our cohort.[3] Numerous studies have utilized the CTM‐3 and the longer‐version CTM‐15. Though we cannot make a direct comparison, the median in our cohort (77.8) was on par with the means from other studies, which ranged from 63 to 82.[5, 17, 18, 19] Several studies also note ceiling effects with clusters of scores at the upper end of the scale, as did we. We conjecture that our cohort's preparedness scores may be higher because our institution has made concerted efforts to improve the discharge education for cardiovascular patients.

In a comparable patient population, the TRACE‐CORE (Transitions, Risks, and Actions in Coronary Events Center for Outcomes Research and Education) study followed a cohort of more than 2200 patients with ACS who were administered the CTM‐15 within 1 month of discharge.[8] In that study, the median CTM‐15 score was 66.6, lower than in our cohort. With regard to predictive ability, the investigators noted that CTM‐3 scores did not differentiate between patients who did and did not have readmissions or emergency department visits. Our results support their concern that the CTM‐15, and by extension the CTM‐3, though adopted widely as part of HCAHPS, may not discriminate sufficiently among differences in patient outcomes or in the quality of care transitions.

More recently, patient‐reported preparedness for discharge was assessed in a prospective cohort in Canada.[16] Lau et al. administered a single‐item measure of readiness to general medicine patients at the time of discharge and found that lower readiness scores were not associated with readmission or death at 30 days after adjustment for the LACE index, as in our analysis.

We must acknowledge the limitations of our findings. First, our sample of recently discharged patients with cardiovascular disease differs from the sample of community‐dwelling, underserved Americans hospitalized in the prior year that was used to reduce the CTM‐15 to 3 items.[5] This difference may explain why the CTM‐3 was not associated with readmission in our sample. Second, our analyses did not include extensive adjustment for patient‐related factors. Rather, our intention was to see how well the preparedness measures performed on their own and to compare their abilities to predict readmission, which is particularly relevant for clinicians who may not have all possible covariates available when predicting readmission. Finally, because we limited the analyses to patients who completed the B‐PREPARED and CTM‐3 measures (88% completion rate), we may lack data on: (1) very ill patients, who were at higher risk of readmission, were least prepared, and were unable to answer the postdischarge phone call; and (2) very functional patients, who were at lower risk of readmission and were too busy to answer the postdischarge phone call. This may have truncated the extremes of our sample.

Importantly, our study has several strengths. We report on the largest sample to date with results of both B‐PREPARED and CTM‐3. Moreover, we examined how these measures compared to a widely used readmission prediction tool, the LACE index. We had very high postdischarge phone call completion rates in the week following discharge. Furthermore, we had thorough assessment of readmission data through patient report, electronic medical record documentation, and collection of outside medical records.

Further research is needed to elucidate: (1) the ideal time to administer patient‐reported measures of preparedness (before or after discharge), and (2) the challenges of implementing these measures in healthcare systems. Remaining research questions center on the tradeoffs and barriers involved in implementing a longer measure like the 11‐item B‐PREPARED compared with a shorter measure like the CTM‐3. We do not know whether a longer measure's length precludes its use by busy clinicians, though it provides more specific information about what patients feel they need at hospital discharge. Additionally, studies need to demonstrate the mutability of preparedness and the responsiveness of these measures to interventions designed to improve the hospital discharge process.

In our sample of recently hospitalized cardiovascular patients, there was a statistically significant association between patient‐reported preparedness for discharge, as measured by B‐PREPARED, and readmission or death at 30 and 90 days, but the magnitude of the association was very small. Furthermore, another patient‐reported preparedness measure, CTM‐3, was not associated with readmission or death at either 30 or 90 days. Lastly, neither measure discriminated well between patients who were and were not readmitted, and neither added meaningfully to the LACE index in terms of predicting 30‐ or 90‐day readmissions.

Disclosures

This study was supported by grant R01 HL109388 from the National Heart, Lung, and Blood Institute (Dr. Kripalani) and in part by grant UL1 RR024975‐01 from the National Center for Research Resources and grant 2 UL1 TR000445‐06 from the National Center for Advancing Translational Sciences. Dr. Kripalani is a consultant to SAI Interactive, holds equity in Bioscape Digital, and is a consultant to and holds equity in PictureRx, LLC. Dr. Bell is supported by the National Institutes of Health (K23AG048347) and by the Eisenstein Women's Heart Fund. Dr. Vasilevskis is supported by the National Institutes of Health (K23AG040157) and the Geriatric Research, Education and Clinical Center. Dr. Mixon is a Veterans Affairs Health Services Research and Development Service Career Development awardee (12‐168) at the Nashville Department of Veterans Affairs. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funding agencies were not involved in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript. All authors had full access to all study data and had a significant role in writing the manuscript. The contents do not represent the views of the US Department of Veterans Affairs or the United States government.

References
  1. Centers for Medicare 9(9):598-603.
  2. Graumlich JF, Novotny NL, Aldag JC. Brief scale measuring patient preparedness for hospital discharge to home: psychometric properties. J Hosp Med. 2008;3(6):446-454.
  3. Coleman EA, Mahoney E, Parry C. Assessing the quality of preparation for posthospital care from the patient's perspective: the care transitions measure. Med Care. 2005;43(3):246-255.
  4. Parry C, Mahoney E, Chalmers SA, Coleman EA. Assessing the quality of transitional care: further applications of the care transitions measure. Med Care. 2008;46(3):317-322.
  5. Coleman EA, Parry C, Chalmers SA, Chugh A, Mahoney E. The central role of performance measurement in improving the quality of transitional care. Home Health Care Serv Q. 2007;26(4):93-104.
  6. Centers for Medicare 3:e001053.
  7. Kansagara D, Englander H, Salanitro AH, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688-1698.
  8. van Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551-557.
  9. Wang H, Robinson RD, Johnson C, et al. Using the LACE index to predict hospital readmissions in congestive heart failure patients. BMC Cardiovasc Disord. 2014;14:97.
  10. Spiva L, Hand M, VanBrackle L, McVay F. Validation of a predictive model to identify patients at high risk for hospital readmission. J Healthc Qual. 2016;38(1):34-41.
  11. Meyers AG, Salanitro A, Wallston KA, et al. Determinants of health after hospital discharge: rationale and design of the Vanderbilt Inpatient Cohort Study (VICS). BMC Health Serv Res. 2014;14:10.
  12. Coleman EA. CTM frequently asked questions. Available at: http://caretransitions.org/tools-and-resources/. Accessed January 22, 2016.
  13. Coleman EA. Instructions for scoring the CTM‐3. Available at: http://caretransitions.org/tools-and-resources/. Accessed January 22, 2016.
  14. Lau D, Padwal RS, Majumdar SR, et al. Patient‐reported discharge readiness and 30‐day risk of readmission or death: a prospective cohort study. Am J Med. 2016;129:89-95.
  15. Parrish MM, O'Malley K, Adams RI, Adams SR, Coleman EA. Implementation of the Care Transitions Intervention: sustainability and lessons learned. Prof Case Manag. 2009;14(6):282-293.
  16. Englander H, Michaels L, Chan B, Kansagara D. The care transitions innovation (C‐TraIn) for socioeconomically disadvantaged adults: results of a cluster randomized controlled trial. J Gen Intern Med. 2014;29(11):1460-1467.
  17. Record JD, Niranjan‐Azadi A, Christmas C, et al. Telephone calls to patients after discharge from the hospital: an important part of transitions of care. Med Educ Online. 2015;29(20):26701.
Issue
Journal of Hospital Medicine - 11(9)
Page Number
603-609
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Amanda S. Mixon, MD, Suite 6000 MCE, North Tower, 1215 21st Avenue South, Nashville, TN 37232; Telephone: 615‐936‐3710; Fax: 615‐936‐1269; E‐mail: [email protected]

Does caffeine intake during pregnancy affect birth weight?

Article Type
Changed
Mon, 01/14/2019 - 14:06
Display Headline
Does caffeine intake during pregnancy affect birth weight?
EVIDENCE-BASED ANSWER:

No. Reducing caffeinated coffee consumption by 180 mg of caffeine (the equivalent of 2 cups) per day after 16 weeks’ gestation doesn’t affect birth weight. Consuming more than 300 mg of caffeine per day is associated with a clinically trivial and statistically insignificant reduction in birth weight (less than 1 ounce) compared with consuming no caffeine (strength of recommendation: B, randomized controlled trial [RCT] and large prospective cohort study).

 

EVIDENCE SUMMARY

A Cochrane systematic review of the effects of caffeine on pregnancy identified 2 studies, only one of which addressed the question of maternal caffeine intake and infant birth weight.1 The double-blind RCT evaluating caffeine intake during pregnancy found no significant differences in birth weight or length of gestation between women who drank regular coffee and women who drank decaffeinated coffee.2

At 16 weeks’ gestation, investigators randomized 1207 pregnant women who reported daily intake of at least 3 cups of regular coffee to drink unlabeled instant coffee (which was either regular or decaffeinated) for the rest of their pregnancy. The women were allowed to request as much of their assigned instant coffee as they wanted.

Subjects were recruited from among all women with uncomplicated, singleton pregnancies who were expected to deliver at a Danish university hospital during the study period. Investigators interviewed the women at 20, 25, and 34 weeks to determine coffee consumption (including both coffee provided by the investigators and other coffee), consumption of other caffeinated beverages, and smoking status.

The difference in caffeine intake between the groups didn’t correspond to significant differences in birth weight (16 g lighter with caffeinated coffee; 95% confidence interval [CI], −40 g to 73 g; P=.48) or birth length (0.03 cm longer with caffeinated coffee; 95% CI, −0.29 to 0.22) among infants born to the 1150 women who completed the study.

Limitations of the study include randomizing women after 16 weeks’ gestation and the observation that many women correctly guessed which type of coffee they received (35% of women drinking caffeinated coffee and 49% of women drinking decaf).

A caffeine effect, but with study limitations

In addition to the Cochrane systematic review described above, a meta-analysis of 9 prospective cohort studies with a total of 90,000 patients evaluated maternal caffeine intake and found that it was associated with increased rates of low birth weight, intrauterine growth restriction (IUGR), or small-for-gestational-age (SGA) infants.3

Researchers assessed caffeine consumption from coffee or other sources either by questionnaire (5 studies) or interview (4 studies) at various times during pregnancy, mostly in the first or second trimester, and assigned subjects to 4 intake categories: none, low (50-149 mg/d), moderate (150-349 mg/d), and high (>350 mg/d).

Compared with no caffeine, all levels of caffeine intake were associated with increased rates of low birth weight, IUGR, or SGA (low intake: relative risk [RR]=1.13; 95% CI, 1.06-1.21; moderate intake: RR=1.38; 95% CI, 1.18-1.62; high intake: RR=1.60; 95% CI, 1.24-2.08).
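
As a reminder of how a relative risk and its 95% CI are obtained from cohort data, here is a minimal sketch using the standard log-RR (Katz) method; the 2x2 counts are hypothetical and are not taken from the meta-analysis.

```python
import math

# Hypothetical counts: exposed = high caffeine intake, outcome = low birth weight/IUGR/SGA.
# Not data from the cited meta-analysis.
a, b = 80, 920    # exposed: events, non-events
c, d = 50, 950    # unexposed: events, non-events

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# 95% CI on the log scale (standard Katz method)
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # RR = 1.60 (95% CI, 1.14-2.25)
```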

A major limitation of the meta-analysis was that 8 of the included studies were identified by the reviewers as having quality problems. The reviewers also identified additional cohort studies, not included in the meta-analysis, which failed to show any association between caffeine intake and poor pregnancy outcomes.

 

 

Results of best-quality study prove clinically trivial

The best-quality prospective cohort study in the review described above was also the largest, comprising two-thirds of the total patients. It found a statistically significant, but clinically trivial, association between caffeine intake and birth weight.4

Investigators from Norway’s Institute of Public Health mailed surveys to 106,707 pregnant Norwegian women and recruited 59,123 with uncomplicated singleton pregnancies. The survey assessed diet and lifestyle at several stages of pregnancy and correlated caffeine intake with birth weight, gestational length, and SGA deliveries. Investigators calculated caffeine intake from coffee and other dietary sources (tea and chocolate).

Higher caffeine intake was associated with a small reduction in birth weight (8 g/100 mg/d of additional caffeine intake; 95% CI, −10 to −6 g/100 mg/d). Higher intake was also associated with increasing likelihood of SGA birth, a finding of borderline significance (odds ratio [OR]=1.18; 95% CI, 1.00-1.38, comparing intake <50 mg/d with 51-200 mg/d; OR=1.62; 95% CI, 1.26-2.29, comparing <50 mg/d with 201-300 mg/d; and OR=1.62; 95% CI, 1.15-2.29, comparing <50 mg/d with >300 mg/d).
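
To connect the dose-response estimate above to the "less than 1 ounce" figure quoted in the evidence-based answer, the sketch below applies the reported slope of 8 g lower birth weight per 100 mg/d of caffeine (95% CI, 6-10 g per 100 mg/d) to a few illustrative intake levels; the intake values are examples, not study data.

```python
# Reported slope from the Norwegian cohort: ~8 g lower birth weight per
# 100 mg/d of additional caffeine (95% CI, 6-10 g per 100 mg/d).
SLOPE_G_PER_100MG = 8.0
CI_LOW, CI_HIGH = 6.0, 10.0
GRAMS_PER_OUNCE = 28.35

for intake_mg_per_day in (100, 200, 300):   # illustrative intakes
    est = SLOPE_G_PER_100MG * intake_mg_per_day / 100
    lo = CI_LOW * intake_mg_per_day / 100
    hi = CI_HIGH * intake_mg_per_day / 100
    print(f"{intake_mg_per_day} mg/d: ~{est:.0f} g lower "
          f"(CI {lo:.0f}-{hi:.0f} g, i.e. {est / GRAMS_PER_OUNCE:.2f} oz)")
# Even at 300 mg/d the point estimate (~24 g) is below 1 ounce (~28 g).
```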

References

1. Jahanfar S, Jaafar SH. Effects of restricted caffeine intake by mother on fetal, neonatal and pregnancy outcome. Cochrane Database Syst Rev. 2013;(2):CD006965.

2. Bech BH, Obel C, Henriksen TB, et al. Effect of reducing caffeine intake on birth weight and length of gestation: randomised controlled trial. BMJ. 2007;334:409.

3. Chen LW, Wu Y, Neelakantan N, et al. Maternal caffeine intake during pregnancy is associated with risk of low birth weight: a systematic review and dose-response meta-analysis. BMC Medicine. 2014;12:174-176.

4. Sengpiel V, Elind E, Bacelis J, et al. Maternal caffeine intake during pregnancy is associated with birth weight but not with gestational length: results from a large prospective observational cohort trial. BMC Medicine. 2013;11:42.

Author and Disclosure Information

Taralee Adams, DO
Gary Kelsberg, MD

University of Washington Family Medicine Residency at Valley Medical Center, Renton

Sarah Safranek, MLIS
University of Washington Health Sciences Library, Seattle

DEPUTY EDITOR
Jon Neher, MD

University of Washington Family Medicine Residency at Valley Medical Center, Renton

Issue
The Journal of Family Practice - 65(3)
Page Number
205,213

Evidence-based answers from the Family Physicians Inquiries Network


Do corticosteroids relieve Bell’s palsy?

Article Type
Changed
Mon, 01/14/2019 - 14:06
Display Headline
Do corticosteroids relieve Bell’s palsy?
EVIDENCE-BASED ANSWER:

Yes, but not for severe disease. Corticosteroids likely improve facial motor function in adults with mild to moderate Bell’s palsy (strength of recommendation [SOR]: B, meta-analysis of heterogeneous randomized controlled trials [RCTs]). Corticosteroids are probably ineffective in treating cosmetically disabling or severe disease (SOR: A, meta-analysis and large RCT).

 

Improvement seen with corticosteroids in mild to moderate palsy

A 2010 Cochrane review of 8 RCTs (7 double-blind) compared corticosteroids with placebo in 1569 patients with Bell’s palsy, 24 months to 84 years of age.1 The definition of mild and moderate severity of symptoms differed across studies, as did corticosteroid doses. Only 6 trials required initiation of therapy within 3 days.

More patients in the corticosteroid group had completely recovered facial motor function at 6 months than patients taking placebo (77% vs 65%; 7 trials, 1507 patients; risk of incomplete recovery: relative risk [RR]=0.71; 95% confidence interval [CI], 0.61-0.81; number needed to treat=10). Improvement in cosmetically disabling or severe disease wasn’t significant (5 trials, 668 patients; RR=0.97; 95% CI, 0.44-2.2).
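
To show where a number needed to treat comes from, the short sketch below applies the standard formula NNT = 1/ARR (absolute risk reduction) to the rounded proportions quoted above. It is illustrative only; the published NNT of 10 is based on the pooled risk difference from the Cochrane meta-analysis, so a back-of-the-envelope value from rounded percentages will differ slightly.

```python
# NNT is the reciprocal of the absolute risk reduction (ARR).
# Using the rounded recovery proportions quoted above
# (77% vs 65% complete recovery, i.e., 23% vs 35% incomplete recovery):

incomplete_steroid = 0.23     # 1 - 0.77
incomplete_placebo = 0.35     # 1 - 0.65
arr = incomplete_placebo - incomplete_steroid      # absolute risk reduction
nnt = 1 / arr
print(f"ARR = {arr:.2f}, crude NNT = {nnt:.1f}")   # ARR = 0.12, crude NNT = 8.3

# The published NNT of 10 reflects the pooled risk difference in the
# Cochrane meta-analysis; the gap versus this crude value comes from
# rounding of the percentages and the pooling method.
```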

Prednisolone with and without an antiviral reduces facial weakness

A 2012 prospective, randomized, double-blind, placebo-controlled, multicenter trial evaluated prednisolone (60 mg/day for 5 days, tapered for 5 days) in 829 adults, 18 to 75 years of age.2 Patients were randomized to one of 4 groups: placebo plus placebo, prednisolone plus placebo, valacyclovir plus placebo, and prednisolone plus valacyclovir. Facial function was assessed over 12 months using the Sunnybrook grading system (scored from 0 to 100; 0=complete paralysis, 100=normal function).

Compared to the groups not receiving any prednisolone, the 2 groups that received prednisolone, either with placebo or valacyclovir, had significantly less facial weakness at 12 months for both mild and moderate palsy (Sunnybrook scores <90: 184 patients; difference= −10.3%; 95% CI, −15.9 to −4.7; P<.001; Sunnybrook score <80: 134 patients; difference= −6.9%; 95% CI, −11.9 to −1.9; P=.01; Sunnybrook score <70: 98 patients; difference= −7.8%; 95% CI, −12.1 to −3.4; P<.001). Patients with severe disease (Sunnybrook score <50) didn’t show significant improvement (56 patients; difference= –2.9%; CI, −6.4 to 0.5; P=.10).

 

 

Guideline recommends corticosteroids for Bell’s palsy

The 2014 American Academy of Neurology evidence-based guideline reviewed all studies of the use of steroids in Bell’s palsy published after the original 2001 guideline.3 They found 2 high-quality RCTs, both of which are included in the 2010 Cochrane review. The 2014 guideline recommends corticosteroids for every patient who develops Bell’s palsy unless a medical contraindication exists (2 Class 1 studies [RCTs], Level A [must prescribe or offer]).

References

1. Salinas RA, Alvarez G, Daly F, et al. Corticosteroids for Bell’s palsy (idiopathic facial paralysis). Cochrane Database Syst Rev. 2010;(3):CD001942.

2. Berg T, Bylund N, Marsk E, et al. The effect of prednisolone on sequelae in Bell’s palsy. Arch Otolaryngol Head Neck Surg. 2012;138:445-449.

3. Gronseth G, Paduga R, American Academy of Neurology. Evidence-based guideline update: steroids and antivirals for Bell’s palsy: report of the Guideline Development Subcommittee of the American Academy of Neurology. Neurology. 2012;79:2209-2213.

Author and Disclosure Information

Kathy Soch, MD
David Purtle, MD
Mary Ara, MD
Kimberly Dabbs, DO

Corpus Christi Family Medicine Residency Program, Corpus Christi, Tex

EDITOR
Corey Lyon, DO

University of Colorado Family Medicine Residency, Denver

Issue
The Journal of Family Practice - 65(3)
Page Number
E1-E2

Evidence-based answers from the Family Physicians Inquiries Network


Stem cell therapy for MS: Steady progress but not ready for general use

Article Type
Changed
Wed, 01/16/2019 - 15:45
Display Headline
Stem cell therapy for MS: Steady progress but not ready for general use

NEW ORLEANS – Stem cell-mediated functional regeneration continues to attract interest in the treatment of multiple sclerosis (MS). The reality, however, is daunting.

“Several types of cell-based therapeutic strategies are under investigation, with different risks, benefits, and goals. Some of these strategies show promise but significant methodological questions need to be answered,” Dr. Andrew D. Goodman said at a meeting held by the Americas Committee for Treatment and Research in Multiple Sclerosis.

Dr. Andrew Goodman

The present reality is that stem cell transplantation is not yet ready for general use to treat MS. Yet, the possible benefits of the approach demand further exploration, including clinical trials, according to Dr. Goodman, professor of neurology, chief of the neuroimmunology unit, and director of the multiple sclerosis center at the University of Rochester (N.Y.).

MS stem cell therapy hinges on the multipotent, regenerative nature of stem cells, particularly mesenchymal stem cells (MSCs) and hematopoietic stem cells (HSCs). MS therapy would involve regeneration of nerve cell myelin in the brain and/or spinal cord and possibly suppression of inflammation.

Autologous HSC transplantation

Immunoablation followed by autologous HSC transplantation (HSCT) has been explored in the phase II randomized ASTIMS trial (Neurology. 2015 Mar 10;84[10]:981-8), in the phase II HALT-MS trial (JAMA Neurol. 2015 Feb;72[2]:159-69), and in a Northwestern University case series (JAMA. 2015 Jan 20;313[3]:275-84).

ASTIMS compared high-dose chemotherapy followed by autologous HSCT with mitoxantrone (which is no longer used). Only 22% of patients had relapsing-remitting MS (RRMS), the form in which HSCT generally works best, while 78% had primary progressive MS, in which HSCT generally works less well. Even so, HSCT was effective, reducing new T2 lesions by 79%. No difference in disability progression was evident. Interim (3-year) results of HALT-MS were encouraging, with sustained remission of active RRMS and improved neurologic function. The Northwestern case series also documented improvements in neurologic disability and other clinical outcomes.

“The available data suggest that immunoablation and HSCT is highly effective in active RRMS. Patients most likely to benefit are young and still ambulatory with a relatively recent disease onset featuring highly active MS with MRI lesion activity and continued activity despite first- and second-line agents,” Dr. Goodman said.

While encouraging, the small patient numbers of the two trials and uncontrolled nature of the case series prevent conclusions concerning the therapeutic use of autologous HSCT in RRMS. Furthermore, risks of the approach include MS relapse, treatment-related adverse effects, adverse effects due to myelosuppression and immunoablation, and secondary autoimmune disorders that may arise at a later time.

Mesenchymal stem cell transplantation

MSCs offer the advantages of a variety of sources in adult tissue, established methods of culture, and either local or peripheral administration. Their finite capacity for proliferation is a drawback. Studies to date of MSC transplantation in MS have involved about 100 patients, so it is much too early to consider MSC use in practice. Even if therapy is contemplated, whether it should be directed at quelling inflammation or at promoting repair remains undecided. Cell production and delivery issues also need to be addressed, Dr. Goodman said.

Human oligodendrocyte progenitor cell transplantation

Implantation of CD140a+ cell populations containing human oligodendrocyte progenitor cells (hOPCs) into the cerebral hemispheres of patients with nonrelapsing secondary progressive MS, as a means of stabilizing or improving neurological function, is being planned. The NYSTEM project, with Dr. Goodman as a lead investigator, will first seek to identify the maximum tolerated dose of hOPCs.

The study is being planned with full awareness of safety issues, including the risk of tumorigenesis. Yet the possible benefits will become evident only through testing in humans. “I don’t know of any way to find out except by trying,” Dr. Goodman said.

Dr. Goodman disclosed receiving research support and/or serving as a consultant to Avanir, Teva, Genzyme/Sanofi, Sun Pharma, Ono, Roche, AbbVie, Biogen, Novartis, Acorda, Purdue, and EMD Serono.

References

Meeting/Event
Author and Disclosure Information

Publications
Topics
Sections
Author and Disclosure Information

Author and Disclosure Information

Meeting/Event
Meeting/Event
Related Articles

NEW ORLEANS – Stem cell-mediated functional regeneration continues to attract interest in the treatment of multiple sclerosis (MS). The reality, however, is daunting.

“Several types of cell-based therapeutic strategies are under investigation, with different risks, benefits, and goals. Some of these strategies show promise but significant methodological questions need to be answered,” Dr. Andrew D. Goodman said at a meeting held by the Americas Committee for Treatment and Research in Multiple Sclerosis.

Dr. Andrew Goodman

The present reality is that stem cell transplantation is not yet ready for general use to treat MS. Yet, the possible benefits of the approach demand further exploration, including clinical trials, according to Dr. Goodman, professor of neurology, chief of the neuroimmunology unit, and director of the multiple sclerosis center at the University of Rochester (N.Y.).

MS stem cell therapy hinges on the pluripotent nature of stem cells, particularly mesenchymal stem cells (MSCs) and hematopoietic stem cells. MS therapy would involve regeneration of nerve cell myelin in the brain and/or spinal cord and possibly suppression of inflammation.

Autologous HSC transplantation

Immunoablation followed by the autologous HSC transplantation (HSCT) has been explored in the ASTIMS phase II randomized trials (Neurology. Mar 10;84[10]:981-8 and HALT-MS, JAMA Neurol. 2015 Feb;72[2]:159-69), and in a Northwestern University case series (JAMA. 2015 Jan 20;313[3]:275-84).

ASTIMS compared high-dose chemotherapy followed by autologous HSCT with mitoxantrone (which is no longer used). Only 22% percent of patients had relapsing-remitting MS (RRMS; the one where HSCT generally works) and 78% had primary progressive MS (where HSCT generally does not work well or is not optimal). Yet, HSCT worked, with new T2 lesions reduced by 79%. No difference in disability progression was evident. Interim (3-year) results of HALT-MS were encouraging, with sustained remission of active RRMS and improved neurologic function. The Northwestern case series also documented improvements in neurologic disability and other clinical outcomes.

“The available data suggest that immunoablation and HSCT is highly effective in active RRMS. Patients most likely to benefit are young and still ambulatory with a relatively recent disease onset featuring highly active MS with MRI lesion activity and continued activity despite first- and second-line agents,” Dr. Goodman said.

While encouraging, the small patient numbers of the two trials and uncontrolled nature of the case series prevent conclusions concerning the therapeutic use of autologous HSCT in RRMS. Furthermore, risks of the approach include MS relapse, treatment-related adverse effects, adverse effects due to myelosuppression and immunoablation, and secondary autoimmune disorders that may arise at a later time.

Mesenchymal stem cell transplantation

MSCs offer the advantages of a variety of sources in adult tissue, established methods of culture, and either local or peripheral administration. Their finite capacity for proliferation is a drawback. Studies to date of MSC transplantation in MS have involved about 100 patients, so it is much too early to consider MSC use. Even if therapy is contemplated, whether it should be directed at quelling inflammation or to promote repair is undecided. As well, cell production and delivery issues need to be addressed, Dr. Goodman said.

Human oligodendrocyte progenitor cell transplantation

The implantation of CD 140a+ cell populations containing human oligodendrocyte progenitor cells (hOPCs) into the cerebral hemispheres of patients with non-relapsing secondary progressive MS as a means of stabilizing or improving neurological function is being planned. The NYSTEM project, with Dr. Goodman as a lead investigator, will first seek to identify the maximum tolerated dose of hOPCs.

The study is planned with the knowledge of safety issues that include cancer tumorigenesis. Yet exploration of the possible benefits will be evident only by testing in humans. “I don’t know of any way to find out except by trying,” Dr. Goodman said.

Dr. Goodman disclosed receiving research support and/or serving as a consultant to Avanir, Teva, Genzyme/Sanofi, Sun Pharma, Ono, Roche, AbbVie, Biogen, Novartis, Acorda, Purdue, and EMD Serono.

NEW ORLEANS – Stem cell-mediated functional regeneration continues to attract interest in the treatment of multiple sclerosis (MS). The reality, however, is daunting.

“Several types of cell-based therapeutic strategies are under investigation, with different risks, benefits, and goals. Some of these strategies show promise but significant methodological questions need to be answered,” Dr. Andrew D. Goodman said at a meeting held by the Americas Committee for Treatment and Research in Multiple Sclerosis.

Dr. Andrew Goodman

The present reality is that stem cell transplantation is not yet ready for general use to treat MS. Yet, the possible benefits of the approach demand further exploration, including clinical trials, according to Dr. Goodman, professor of neurology, chief of the neuroimmunology unit, and director of the multiple sclerosis center at the University of Rochester (N.Y.).

MS stem cell therapy hinges on the pluripotent nature of stem cells, particularly mesenchymal stem cells (MSCs) and hematopoietic stem cells. MS therapy would involve regeneration of nerve cell myelin in the brain and/or spinal cord and possibly suppression of inflammation.

Autologous HSC transplantation

Immunoablation followed by the autologous HSC transplantation (HSCT) has been explored in the ASTIMS phase II randomized trials (Neurology. Mar 10;84[10]:981-8 and HALT-MS, JAMA Neurol. 2015 Feb;72[2]:159-69), and in a Northwestern University case series (JAMA. 2015 Jan 20;313[3]:275-84).

ASTIMS compared high-dose chemotherapy followed by autologous HSCT with mitoxantrone (which is no longer used). Only 22% percent of patients had relapsing-remitting MS (RRMS; the one where HSCT generally works) and 78% had primary progressive MS (where HSCT generally does not work well or is not optimal). Yet, HSCT worked, with new T2 lesions reduced by 79%. No difference in disability progression was evident. Interim (3-year) results of HALT-MS were encouraging, with sustained remission of active RRMS and improved neurologic function. The Northwestern case series also documented improvements in neurologic disability and other clinical outcomes.

“The available data suggest that immunoablation and HSCT is highly effective in active RRMS. Patients most likely to benefit are young and still ambulatory with a relatively recent disease onset featuring highly active MS with MRI lesion activity and continued activity despite first- and second-line agents,” Dr. Goodman said.

While encouraging, the small patient numbers of the two trials and uncontrolled nature of the case series prevent conclusions concerning the therapeutic use of autologous HSCT in RRMS. Furthermore, risks of the approach include MS relapse, treatment-related adverse effects, adverse effects due to myelosuppression and immunoablation, and secondary autoimmune disorders that may arise at a later time.

Mesenchymal stem cell transplantation

MSCs offer the advantages of a variety of sources in adult tissue, established methods of culture, and either local or peripheral administration. Their finite capacity for proliferation is a drawback. Studies to date of MSC transplantation in MS have involved about 100 patients, so it is much too early to consider MSC use. Even if therapy is contemplated, whether it should be directed at quelling inflammation or to promote repair is undecided. As well, cell production and delivery issues need to be addressed, Dr. Goodman said.

Human oligodendrocyte progenitor cell transplantation

The implantation of CD 140a+ cell populations containing human oligodendrocyte progenitor cells (hOPCs) into the cerebral hemispheres of patients with non-relapsing secondary progressive MS as a means of stabilizing or improving neurological function is being planned. The NYSTEM project, with Dr. Goodman as a lead investigator, will first seek to identify the maximum tolerated dose of hOPCs.

The study is being planned with known safety concerns, including tumorigenesis, in mind. Yet the potential benefits can be established only by testing in humans. “I don’t know of any way to find out except by trying,” Dr. Goodman said.

Dr. Goodman disclosed receiving research support from and/or serving as a consultant to Avanir, Teva, Genzyme/Sanofi, Sun Pharma, Ono, Roche, AbbVie, Biogen, Novartis, Acorda, Purdue, and EMD Serono.

Display Headline
Stem cell therapy for MS: Steady progress but not ready for general use

Article Source
EXPERT ANALYSIS FROM ACTRIMS FORUM 2016

QI and Patient Safety: No Longer Just an Elective for Trainees

Article Type
Changed
Fri, 09/14/2018 - 12:05
Display Headline
QI and Patient Safety: No Longer Just an Elective for Trainees

The demand for training in healthcare quality and patient safety, for both medical students and residents, has never been higher. The Quality and Safety Educators Academy (QSEA, sites.hospitalmedicine.org/qsea) responds to that demand by providing medical educators with the knowledge and tools to integrate quality improvement and safety concepts into their curricula.

Sponsored by the Society of Hospital Medicine (SHM) and the Alliance for Academic Internal Medicine (AAIM), QSEA 2016 is a two-and-a-half-day course designed as a faculty development program. This year, QSEA will be held at Tempe Mission Palms Hotel and Conference Center in Tempe, Ariz., from May 23 to 25.

Attendees will enjoy a hands-on, interactive learning environment with a 10-to-1 student-to-faculty ratio. Participants will develop a professional network and leave with a tool kit of educational resources and curricular tools for quality and safety education.

Think QSEA is for you? Make plans to attend now if you are:

  • A program director or assistant program director interested in acquiring new curricular ideas to help meet ACGME requirements that residency programs integrate quality and safety into their curricula
  • A medical school leader or clerkship director developing quality and safety curricula for students
  • A faculty member beginning a new role or expanding an existing role in quality and safety education
  • A quality and safety leader who wishes to extend influence and effectiveness by learning strategies to teach and engage trainees

QSEA has sold out each of the past four years, so don’t delay. Register online at sites.hospitalmedicine.org/qsea/register.html or via phone at 800-843-3360. Questions? Email [email protected]. TH


Brett Radler is SHM’s communications coordinator.

QSEA Testimonials

Past attendees have great things to say about QSEA:

  • “As a ‘recovering private practice’ hospitalist, the conference helped me clarify how I can optimally fit within the academic triangle of clinical care, education, and research.”
  • “LOVED the curriculum development part—that was the missing piece for me.”
  • “The faculty was exceptional, and I was so impressed by both their accomplishments and their engagement in our education.”

Register Now for QSEA 2016

Register online at sites.hospitalmedicine.org/qsea/register.html or via phone at 800-843-3360. Email questions to [email protected].

Issue
The Hospitalist - 2016(02)