Development and Validation of an Algorithm to Identify Planned Readmissions From Claims Data

Elizabeth E. Drye, MD, SM
Department of Health Care Policy, Harvard Medical School and Department of Biostatistics, Harvard School of Public Health, Boston, Massachusetts

The Centers for Medicare & Medicaid Services (CMS) publicly reports all‐cause risk‐standardized readmission rates after acute‐care hospitalization for acute myocardial infarction, pneumonia, heart failure, total hip and knee arthroplasty, chronic obstructive pulmonary disease, stroke, and for patients hospital‐wide.[1, 2, 3, 4, 5] Ideally, these measures should capture unplanned readmissions that arise from acute clinical events requiring urgent rehospitalization. Planned readmissions, which are scheduled admissions usually involving nonurgent procedures, may not be a signal of quality of care. Including planned readmissions in readmission quality measures could create a disincentive to provide appropriate care to patients who are scheduled for elective or necessary procedures unrelated to the quality of the prior admission. Accordingly, under contract to the CMS, we were asked to develop an algorithm to identify planned readmissions. A version of this algorithm is now incorporated into all publicly reported readmission measures.

Given the widespread use of the planned readmission algorithm in public reporting and its implications for hospital quality measurement and evaluation, the objective of this study was to describe the development process, and to validate and refine the algorithm by reviewing charts of readmitted patients.

METHODS

Algorithm Development

To create a planned readmission algorithm, we first defined planned. We determined that readmissions for obstetrical delivery, maintenance chemotherapy, major organ transplant, and rehabilitation should always be considered planned in the sense that they are desired and/or inevitable, even if not specifically planned on a certain date. Apart from these specific types of readmissions, we defined planned readmissions as nonacute readmissions for scheduled procedures, because the vast majority of planned admissions are related to procedures. We also defined readmissions for acute illness or for complications of care as unplanned for the purposes of a quality measure. Even if such readmissions included a potentially planned procedure, they should not be removed from the measure outcome, because complications of care represent an important dimension of quality that should not be excluded from outcome measurement. This definition of planned readmissions does not imply that all unplanned readmissions are unexpected or avoidable. However, it has proven very difficult to reliably define avoidable readmissions, even by expert review of charts, and we did not attempt to do so here.[6, 7]

In the second stage, we operationalized this definition into an algorithm. We used the Agency for Healthcare Research and Quality's Clinical Classification Software (CCS) codes to group thousands of individual procedure and diagnosis International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) codes into clinically coherent, mutually exclusive procedure CCS categories and mutually exclusive diagnosis CCS categories, respectively. Clinicians on the investigative team reviewed the procedure categories to identify those that are commonly planned and that would require inpatient admission. We also reviewed the diagnosis categories to identify acute diagnoses unlikely to accompany elective procedures. We then created a flow diagram through which every readmission could be run to determine whether it was planned or unplanned based on our categorizations of procedures and diagnoses (Figure 1, and Supporting Information, Appendix A, in the online version of this article). This version of the algorithm (v1.0) was submitted to the National Quality Forum (NQF) as part of the hospital-wide readmission measure. The measure (NQF #1789) received endorsement in April 2012.

Figure 1
Flow diagram for planned readmissions (see Supporting Information, Appendix A, in the online version of this article for referenced tables).
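The flow logic described above can be sketched as follows. This is a simplified illustration only: the category sets shown are small, hypothetical stand-ins for the full CCS lists in Appendix A, not the measure's actual tables.

```python
# Illustrative sketch of the planned readmission flow logic (not the measure's
# actual CCS tables). Category names below are placeholder examples.
ALWAYS_PLANNED_DX = {"Maintenance chemotherapy", "Rehabilitation"}        # diagnosis CCS groups
ALWAYS_PLANNED_PROC = {"Major organ transplant", "Obstetrical delivery"}  # illustrative
POTENTIALLY_PLANNED_PROC = {"Heart valve procedures", "Spinal fusion"}    # small subset, for illustration
ACUTE_DX = {"Acute myocardial infarction", "Sepsis"}                      # small subset, for illustration

def classify_readmission(principal_dx_ccs, procedure_ccs_list):
    """Return 'planned' or 'unplanned' following the flow diagram's logic."""
    # Step 1: always-planned admission types (desired and/or inevitable).
    if principal_dx_ccs in ALWAYS_PLANNED_DX or ALWAYS_PLANNED_PROC.intersection(procedure_ccs_list):
        return "planned"
    # Step 2: a potentially planned procedure qualifies the admission as planned...
    has_potentially_planned = bool(POTENTIALLY_PLANNED_PROC.intersection(procedure_ccs_list))
    # Step 3: ...unless the principal diagnosis is acute, which marks it unplanned.
    if has_potentially_planned and principal_dx_ccs not in ACUTE_DX:
        return "planned"
    return "unplanned"
```

For example, a readmission with a nonacute principal diagnosis and a spinal fusion would be classified as planned, while the same procedure with a principal diagnosis of sepsis would remain unplanned.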

In the third stage of development, we posted the algorithm for 2 public comment periods and recruited 27 outside experts to review and refine the algorithm following a standardized, structured process (see Supporting Information, Appendix B, in the online version of this article). Because the measures publicly report and hold hospitals accountable for unplanned readmission rates, we felt it most important that the algorithm include as few planned readmissions in the reported, unplanned outcome as possible (ie, have high negative predictive value). Therefore, in equivocal situations in which experts felt procedure categories were equally often planned or unplanned, we added those procedures to the potentially planned list. We also solicited feedback from hospitals on algorithm performance during a confidential test run of the hospital‐wide readmission measure in the fall of 2012. Based on all of this feedback, we made a number of changes to the algorithm, which was then identified as v2.1. Version 2.1 of the algorithm was submitted to the NQF as part of the endorsement process for the acute myocardial infarction and heart failure readmission measures and was endorsed by the NQF in January 2013. The algorithm (v2.1) is now applied, adapted if necessary, to all publicly reported readmission measures.[8]

Algorithm Validation: Study Cohort

We recruited 2 hospital systems to participate in a chart validation study of the accuracy of the planned readmission algorithm (v2.1). Within these 2 health systems, we selected 7 hospitals with varying bed size, teaching status, and safety-net status. Each system included 1 large academic teaching hospital that serves as a regional referral center. For each hospital's index admissions, we applied the inclusion and exclusion criteria from the hospital-wide readmission measure. Index admissions were included for patients age 65 years or older; enrolled in Medicare fee-for-service (FFS); discharged from a nonfederal, short-stay, acute-care hospital or critical access hospital; without an in-hospital death; not transferred to another acute-care facility; and enrolled in Part A Medicare for 1 year prior to discharge. We excluded index admissions for patients without at least 30 days postdischarge enrollment in FFS Medicare, discharged against medical advice, admitted for medical treatment of cancer or primary psychiatric disease, admitted to a Prospective Payment System-exempt cancer hospital, or who died during the index hospitalization. In addition, for this study, we included only index admissions that were followed by a readmission to a hospital within the participating health system between July 1, 2011 and June 30, 2012. Institutional review board approval was obtained from each of the participating health systems, which granted waivers of signed informed consent and Health Insurance Portability and Accountability Act waivers.

Algorithm Validation: Sample Size Calculation

We determined a priori that the minimum acceptable positive predictive value, or proportion of all readmissions the algorithm labels planned that are truly planned, would be 60%, and the minimum acceptable negative predictive value, or proportion of all readmissions the algorithm labels as unplanned that are truly unplanned, would be 80%. We calculated the sample size required to be confident of these values within ±10% and determined we would need a total of 291 planned charts and 162 unplanned charts. We inflated these numbers by 20% to account for missing or unobtainable charts for a total of 550 charts. To achieve this sample size, we included all eligible readmissions from all participating hospitals that were categorized as planned. At the 5 smaller hospitals, we randomly selected an equal number of unplanned readmissions occurring at any hospital in its healthcare system. At the 2 largest hospitals, we randomly selected 50 unplanned readmissions occurring at any hospital in its healthcare system.
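For reference, the usual normal-approximation formula for sizing an estimate of a proportion to a given confidence-interval half-width can be sketched as below. This generic formula does not by itself reproduce the study's targets of 291 and 162 charts, which presumably reflect additional design assumptions (e.g., exact binomial methods or power calculations) not detailed in the text.

```python
import math

def binomial_sample_size(p, half_width, z=1.96):
    """Charts needed so a 95% CI around an observed proportion p has
    approximately the requested half-width (normal approximation)."""
    return math.ceil(z**2 * p * (1 - p) / half_width**2)
```

For example, `binomial_sample_size(0.60, 0.10)` yields 93 charts for estimating a 60% proportion to within ±10%.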

Algorithm Validation: Data Abstraction

We developed an abstraction tool, tested and refined it using sample charts, and built the final tool into a secure, password-protected Microsoft Access 2007 (Microsoft Corp., Redmond, WA) database (see Supporting Information, Appendix C, in the online version of this article). Experienced chart abstractors with RN or MD degrees from each hospital site participated in a 1-hour training session covering chart review, the definitions of planned and unplanned readmissions, and the data abstraction process. For each readmission, we asked abstractors to review as needed: emergency department triage and physician notes, admission history and physical, operative report, discharge summary, and/or discharge summary from a prior admission. The abstractors verified the accuracy of the administrative billing data, including procedures and principal diagnosis. In addition, they abstracted the source of admission and dates of all major procedures. Then the abstractors provided their opinion and supporting rationale as to whether a readmission was planned or unplanned. They were not asked to determine whether the readmission was preventable. To determine the inter-rater reliability of data abstraction, an independent abstractor at each health system recoded a random sample of 10% of the charts.

Statistical Analysis

To ensure that we had obtained a representative sample of charts, we identified the 10 most commonly planned procedures among cases identified as planned by the algorithm in the validation cohort and then compared this with planned cases nationally. To confirm the reliability of the abstraction process, we used the kappa statistic to determine the inter‐rater reliability of the determination of planned or unplanned status. Additionally, the full study team, including 5 practicing clinicians, reviewed the details of every chart abstraction in which the algorithm was found to have misclassified the readmission as planned or unplanned. In 11 cases we determined that the abstractor had misunderstood the definition of planned readmission (ie, not all direct admissions are necessarily planned) and we reclassified the chart review assignment accordingly.
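The kappa statistic referenced here is Cohen's kappa, which compares observed rater agreement to the agreement expected by chance given each rater's marginal rates. A minimal sketch for a binary planned/unplanned call, using hypothetical counts in the usage example:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for two raters making a binary planned/unplanned call.
    a = both say planned, d = both say unplanned, b and c = disagreements."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement if the raters were independent, from marginal rates.
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)
```

With hypothetical counts of 20 planned/planned, 75 unplanned/unplanned, and 5 disagreements, `cohens_kappa(20, 3, 2, 75)` is about 0.86, in the same range as the 0.83 reported in this study.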

We calculated sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm for the validation cohort as a whole, weighted to account for the prevalence of planned readmissions as defined by the algorithm in the national data (7.8%). Weighting is necessary because we did not obtain a pure random sample, but rather selected a stratified sample that oversampled algorithm-identified planned readmissions.[9] We also calculated these rates separately for large hospitals (>600 beds) and for small hospitals (≤600 beds).
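Concretely, the weighting computes within-stratum proportions from the chart review and recombines them using the national mix of algorithm-planned (7.8%) and algorithm-unplanned (92.2%) readmissions. A sketch, using the validation counts reported in the Results (181 true and 170 false among algorithm-planned charts; 266 true and 15 false among algorithm-unplanned charts):

```python
W_PLANNED = 0.078          # national prevalence of algorithm-planned readmissions
W_UNPLANNED = 1 - W_PLANNED

def weighted_test_characteristics(tp, fp, tn, fn):
    """Weight within-stratum chart-review rates by national stratum prevalence."""
    # Within-stratum proportions (the stratified sample oversampled planned).
    ppv = tp / (tp + fp)                     # P(truly planned | algorithm planned)
    npv = tn / (tn + fn)                     # P(truly unplanned | algorithm unplanned)
    # Reweight to the national mix of algorithm-planned vs. -unplanned charts.
    w_tp, w_fp = W_PLANNED * ppv, W_PLANNED * (1 - ppv)
    w_fn, w_tn = W_UNPLANNED * (1 - npv), W_UNPLANNED * npv
    sensitivity = w_tp / (w_tp + w_fn)
    specificity = w_tn / (w_tn + w_fp)
    return sensitivity, specificity, ppv, npv
```

Called as `weighted_test_characteristics(181, 170, 266, 15)`, this reproduces approximately the full-cohort v2.1 values in Table 2 (45% sensitivity, 96% specificity, 52% positive predictive value, 95% negative predictive value).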

Finally, we examined performance of the algorithm for individual procedures and diagnoses to determine whether any procedures or diagnoses should be added or removed from the algorithm. First, we reviewed the diagnoses, procedures, and brief narratives provided by the abstractors for all cases in which the algorithm misclassified the readmission as either planned or unplanned. Second, we calculated the positive predictive value for each procedure that had been flagged as planned by the algorithm, and reviewed all readmissions (correctly and incorrectly classified) in which procedures with low positive predictive value took place. We also calculated the frequency with which the procedure was the only qualifying procedure resulting in an accurate or inaccurate classification. Third, to identify changes that should be made to the lists of acute and nonacute diagnoses, we reviewed the principal diagnosis for all readmissions misclassified by the algorithm as either planned or unplanned, and examined the specific ICD‐9‐CM codes within each CCS group that were most commonly associated with misclassifications.

After determining the changes that should be made to the algorithm based on these analyses, we recalculated the sensitivity, specificity, positive predictive value, and negative predictive value of the proposed revised algorithm (v3.0). All analyses used SAS version 9.3 (SAS Institute, Cary, NC).

RESULTS

Study Cohort

Characteristics of participating hospitals are shown in Table 1. Hospitals represented in this sample ranged in size, teaching status, and safety net status, although all were nonprofit. We selected 663 readmissions for review, 363 planned and 300 unplanned. Overall, we were able to select 80% of hospitals' planned cases for review; the remainder occurred at hospitals outside the participating hospital system. Abstractors were able to locate and review 634 (96%) of the eligible charts (range, 86%-100% per hospital). The kappa statistic for inter-rater reliability was 0.83.

Hospital Characteristics
Description | Hospitals, N | Readmissions Selected for Review, N* | Readmissions Reviewed, N (% of Eligible) | Unplanned Readmissions Reviewed, N | Planned Readmissions Reviewed, N | % of Hospital's Planned Readmissions Reviewed*
NOTE: *Nonselected cases were readmitted to hospitals outside the system and could not be reviewed.

All hospitals | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
No. of beds, >600 | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
No. of beds, >300-600 | 2 | 190 | 173 (91.1) | 85 | 88 | 87.1
No. of beds, <300 | 3 | 127 | 122 (96.0) | 82 | 40 | 44.9
Ownership, government | 0 | | | | |
Ownership, for profit | 0 | | | | |
Ownership, not for profit | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
Teaching status, teaching | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Teaching status, nonteaching | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Safety net status, safety net | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Safety net status, nonsafety net | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Region, New England | 3 | 409 | 392 (95.8) | 155 | 237 | 85.9
Region, South Central | 4 | 254 | 242 (95.3) | 128 | 114 | 64.0

The study sample included 57/67 (85%) of the procedure or condition categories on the potentially planned list. The most common procedure CCS categories among planned readmissions (v2.1) in the validation cohort were very similar to those in the national dataset (see Supporting Information, Appendix D, in the online version of this article). Of the top 20 most commonly planned procedure CCS categories in the validation set, all but 2, therapeutic radiology for cancer treatment (CCS 211) and peripheral vascular bypass (CCS 55), were among the top 20 most commonly planned procedure CCS categories in the national data.

Test Characteristics of Algorithm

The weighted test characteristics of the current algorithm (v2.1) are shown in Table 2. Overall, the algorithm correctly identified 266 readmissions as unplanned and 181 readmissions as planned, and misidentified 170 readmissions as planned and 15 as unplanned. Once weighted to account for the stratified sampling design, the overall prevalence of true planned readmissions was 8.9% of readmissions. The weighted sensitivity was 45.1% overall and was higher in large teaching centers than in smaller community hospitals. The weighted specificity was 95.9%. The positive predictive value was 51.6%, and the negative predictive value was 94.7%.

Test Characteristics of the Algorithm
Cohort | Sensitivity | Specificity | Positive Predictive Value | Negative Predictive Value

Algorithm v2.1
Full cohort | 45.1% | 95.9% | 51.6% | 94.7%
Large hospitals | 50.9% | 96.1% | 53.8% | 95.6%
Small hospitals | 40.2% | 95.5% | 47.7% | 94.0%

Revised algorithm v3.0
Full cohort | 49.8% | 96.5% | 58.7% | 94.5%
Large hospitals | 57.1% | 96.8% | 63.0% | 95.9%
Small hospitals | 42.6% | 95.9% | 52.6% | 93.9%

Accuracy of Individual Diagnoses and Procedures

The positive predictive value of the algorithm for individual procedure categories varied widely, from 0% to 100% among procedures with at least 10 cases (Table 3). The procedure for which the algorithm was least accurate was CCS 211, therapeutic radiology for cancer treatment (0% positive predictive value). By contrast, maintenance chemotherapy (90%) and other therapeutic procedures, hemic and lymphatic system (100%) were most accurate. Common procedures with less than 50% positive predictive value (ie, that the algorithm commonly misclassified as planned) were diagnostic cardiac catheterization (25%); debridement of wound, infection, or burn (25%); amputation of lower extremity (29%); insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (33%); and other hernia repair (43%). Of these, diagnostic cardiac catheterization and cardiac devices are the first and second most common procedures nationally, respectively.

Positive Predictive Value of Algorithm by Procedure Category (Among Procedures With at Least Ten Readmissions in Validation Cohort)
Readmission Procedure CCS Code | Total Categorized as Planned by Algorithm, N | Verified as Planned by Chart Review, N | Positive Predictive Value
NOTE: Abbreviations: CCS, Clinical Classification Software; OR, operating room.

47 Diagnostic cardiac catheterization; coronary arteriography | 44 | 11 | 25%
224 Cancer chemotherapy | 40 | 22 | 55%
157 Amputation of lower extremity | 31 | 9 | 29%
49 Other operating room heart procedures | 27 | 16 | 59%
48 Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator | 24 | 8 | 33%
43 Heart valve procedures | 20 | 16 | 80%
Maintenance chemotherapy (diagnosis CCS 45) | 20 | 18 | 90%
78 Colorectal resection | 18 | 9 | 50%
169 Debridement of wound, infection or burn | 16 | 4 | 25%
84 Cholecystectomy and common duct exploration | 16 | 5 | 31%
99 Other OR gastrointestinal therapeutic procedures | 16 | 8 | 50%
158 Spinal fusion | 15 | 11 | 73%
142 Partial excision bone | 14 | 10 | 71%
86 Other hernia repair | 14 | 6 | 43%
44 Coronary artery bypass graft | 13 | 10 | 77%
67 Other therapeutic procedures, hemic and lymphatic system | 13 | 13 | 100%
211 Therapeutic radiology for cancer treatment | 12 | 0 | 0%
45 Percutaneous transluminal coronary angioplasty | 11 | 7 | 64%
Total | 497 | 272 | 54.7%

The readmissions with least abstractor agreement were those involving CCS 157 (amputation of lower extremity) and CCS 169 (debridement of wound, infection or burn). Readmissions for these procedures were nearly always performed as a consequence of acute worsening of chronic conditions such as osteomyelitis or ulceration. Abstractors were divided over whether these readmissions were appropriate to call planned.

Changes to the Algorithm

We determined that the accuracy of the algorithm would be improved by removing 2 procedure categories from the planned procedure list (therapeutic radiation [CCS 211] and cancer chemotherapy [CCS 224]), adding 1 diagnosis category to the acute diagnosis list (hypertension with complications [CCS 99]), and splitting 2 diagnosis condition categories into acute and nonacute ICD-9-CM codes (pancreatic disorders [CCS 152] and biliary tract disease [CCS 149]). Detailed rationales for each modification to the planned readmission algorithm are described in Table 4. We felt further examination of diagnostic cardiac catheterization and cardiac devices was warranted given their high frequency, despite low positive predictive value. We also elected not to alter the categorization of amputation or debridement because it was not easy to determine whether these admissions were planned or unplanned even with chart review. We plan further analyses of these procedure categories.

Suggested Changes to Planned Readmission Algorithm v2.1 With Rationale
NOTE: Abbreviations: CCS, Clinical Classification Software; ICD-9, International Classification of Diseases, Ninth Revision. *Number of cases in which CCS 47 was the only qualifying procedure. †Number of cases in which CCS 48 was the only qualifying procedure.

Remove from planned procedure list: Therapeutic radiation (CCS 211)
Algorithm → chart review, N. Accurate: Planned → Planned, 0; Unplanned → Unplanned, 0. Inaccurate: Unplanned → Planned, 0; Planned → Unplanned, 12.
Rationale: The algorithm was inaccurate in every case. All therapeutic radiology during readmissions was performed because of acute illness (pain crisis, neurologic crisis) or because scheduled treatment occurred during an unplanned readmission. In national data, this ranks as the 25th most common planned procedure identified by the algorithm v2.1.

Remove from planned procedure list: Cancer chemotherapy (CCS 224)
Algorithm → chart review, N. Accurate: Planned → Planned, 22; Unplanned → Unplanned, 0. Inaccurate: Unplanned → Planned, 0; Planned → Unplanned, 18.
Rationale: Of the 22 correctly identified as planned, 18 (82%) would already have been categorized as planned because of a principal diagnosis of maintenance chemotherapy. Therefore, removing CCS 224 from the planned procedure list would miss only a small fraction of planned readmissions but would avoid a large number of misclassifications. In national data, this ranks as the 8th most common planned procedure identified by the algorithm v2.1.

Add to planned procedure list: None
Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. A handful of these cases were missed because the planned procedure was not on the current planned procedure list; however, those procedures (eg, abdominal paracentesis, colonoscopy, endoscopy) were nearly always unplanned overall and should therefore not be added to the list of procedures that potentially qualify an admission as planned.

Remove from acute diagnosis list: None
Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. The relevant disqualifying acute diagnoses were much more often associated with unplanned readmissions in our dataset.

Add to acute diagnosis list: Hypertension with complications (CCS 99)
Algorithm → chart review, N. Accurate: Planned → Planned, 1; Unplanned → Unplanned, 2. Inaccurate: Unplanned → Planned, 0; Planned → Unplanned, 10.
Rationale: This CCS was associated with only 1 planned readmission (for elective nephrectomy, a very rare procedure). Every other time this CCS appeared in the dataset, it was associated with an unplanned readmission (12/13, 92%); 10 of those, however, were misclassified by the algorithm as planned because they were not excluded by diagnosis (91% error rate). Consequently, adding this CCS to the acute diagnosis list is likely to miss only a very small fraction of planned readmissions, while making the overall algorithm much more accurate.

Split diagnosis condition category into component ICD-9 codes: Pancreatic disorders (CCS 152)
Algorithm → chart review, N. Accurate: Planned → Planned, 0; Unplanned → Unplanned, 1. Inaccurate: Unplanned → Planned, 0; Planned → Unplanned, 2.
Rationale: ICD-9 code 577.0 (acute pancreatitis) is the only acute code in this CCS. Acute pancreatitis was present in 2 cases that were misclassified as planned. Clinically, there is no situation in which a planned procedure would reasonably be performed in the setting of acute pancreatitis. Moving ICD-9 code 577.0 to the acute list and leaving the rest of the ICD-9 codes in CCS 152 on the nonacute list will enable the algorithm to continue to identify planned procedures for chronic pancreatitis.

Split diagnosis condition category into component ICD-9 codes: Biliary tract disease (CCS 149)
Algorithm → chart review, N. Accurate: Planned → Planned, 2; Unplanned → Unplanned, 3. Inaccurate: Unplanned → Planned, 0; Planned → Unplanned, 12.
Rationale: This CCS is a mix of acute and chronic diagnoses. Of 14 charts classified as planned with CCS 149 in the principal diagnosis field, 12 were misclassified (of which 10 were associated with cholecystectomy). Separating out the acute and nonacute diagnoses will increase the accuracy of the algorithm while still ensuring that planned cholecystectomies and other procedures can be identified. Of the ICD-9 codes in CCS 149, the following will be added to the acute diagnosis list: 574.0, 574.3, 574.6, 574.8, 575.0, 575.12, 576.1.

Consider for change after additional study: Diagnostic cardiac catheterization (CCS 47)
Algorithm → chart review, N. Accurate: Planned → Planned, 3*; Unplanned → Unplanned, 13*. Inaccurate: Unplanned → Planned, 0*; Planned → Unplanned, 25*.
Rationale: The algorithm misclassified as planned 25/38 (66%) unplanned readmissions in which diagnostic catheterizations were the only qualifying planned procedure. It also correctly identified 3/3 (100%) planned readmissions in which diagnostic cardiac catheterizations were the only qualifying planned procedure. This is the highest volume procedure in national data.

Consider for change after additional study: Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (CCS 48)
Algorithm → chart review, N. Accurate: Planned → Planned, 7†; Unplanned → Unplanned, 1†. Inaccurate: Unplanned → Planned, 1†; Planned → Unplanned, 4†.
Rationale: The algorithm misclassified as planned 4/5 (80%) unplanned readmissions in which cardiac devices were the only qualifying procedure. However, it also correctly identified 7/8 (87.5%) planned readmissions in which cardiac devices were the only qualifying planned procedure. CCS 48 is the second most common planned procedure category nationally.

The revised algorithm (v3.0) had a weighted sensitivity of 49.8%, weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5% (Table 2). In aggregate, these changes would increase the reported unplanned readmission rate from 16.0% to 16.1% in the hospital‐wide readmission measure, using 2011 to 2012 data, and would decrease the fraction of all readmissions considered planned from 7.8% to 7.2%.

DISCUSSION

We developed an algorithm based on administrative data that in its currently implemented form is very accurate at identifying unplanned readmissions, ensuring that readmissions included in publicly reported readmission measures are likely to be truly unplanned. However, nearly half of readmissions the algorithm classifies as planned are actually unplanned. That is, the algorithm is overcautious in excluding unplanned readmissions that could have counted as outcomes, particularly among admissions that include diagnostic cardiac catheterization or placement of cardiac devices (pacemakers, defibrillators). However, these errors only occur within the 7.8% of readmissions that are classified as planned and therefore do not affect overall readmission rates dramatically. A perfect algorithm would reclassify approximately half of these planned readmissions as unplanned, increasing the overall readmission rate by 0.6 percentage points.

On the other hand, the algorithm also only identifies approximately half of true planned readmissions as planned. Because the true prevalence of planned readmissions is low (approximately 9% of readmissions based on weighted chart review prevalence, or an absolute rate of 1.4%), this low sensitivity has a small effect on algorithm performance. Removing all true planned readmissions from the measure outcome would decrease the overall readmission rate by 0.8 percentage points, similar to the expected 0.6 percentage point increase that would result from better identifying unplanned readmissions; thus, a perfect algorithm would likely decrease the reported unplanned readmission rate by a net 0.2 percentage points. Overall, the existing algorithm appears to come close to the true prevalence of planned readmissions, despite inaccuracy on an individual-case basis. The algorithm performed best at large hospitals, which are at greatest risk of being statistical outliers and of accruing penalties under the Hospital Readmissions Reduction Program.[10]
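The arithmetic in the two paragraphs above can be checked with a short back-of-envelope sketch, using the approximate figures reported in this study:

```python
# Back-of-envelope check of the perfect-algorithm arithmetic, using
# approximate figures reported in this study.
all_cause_rate = 0.161     # all-cause readmission rate (2011-2012 data)
planned_share = 0.078      # share of readmissions the algorithm labels planned
ppv = 0.516                # of those, the fraction that are truly planned
true_planned = 0.089       # weighted true prevalence of planned readmissions
sensitivity = 0.451        # fraction of true planned the algorithm catches

# Reclassifying the algorithm's false "planned" calls back to unplanned
# would raise the reported unplanned rate by about 0.6 percentage points:
increase = all_cause_rate * planned_share * (1 - ppv)
# Also removing the true planned readmissions the algorithm currently
# misses would lower it by about 0.8 percentage points:
decrease = all_cause_rate * true_planned * (1 - sensitivity)
net_change = decrease - increase   # roughly a 0.2 percentage point net decrease
```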

We identified several changes that marginally improved the performance of the algorithm by reducing the number of unplanned readmissions that are incorrectly removed from the measure, while avoiding the inappropriate inclusion of planned readmissions in the outcome. This revised algorithm, v3.0, was applied to public reporting of readmission rates at the end of 2014. Overall, implementing these changes increases the reported readmission rate very slightly. We also identified other procedures associated with high inaccuracy rates, removal of which would have larger impact on reporting rates, and which therefore merit further evaluation.

There are other potential methods of identifying planned readmissions. For instance, as of October 1, 2013, new administrative billing codes were created to allow hospitals to indicate that a patient was discharged with a planned acute‐care hospital inpatient readmission, without limitation as to when it will take place.[11] This code must be used at the time of the index admission to indicate that a future planned admission is expected, and was specified only to be used for neonates and patients with acute myocardial infarction. This approach, however, would omit planned readmissions that are not known to the initial discharging team, potentially missing planned readmissions. Conversely, some patients discharged with a plan for readmission may be unexpectedly readmitted for an unplanned reason. Given that the new codes were not available at the time we conducted the validation study, we were not able to determine how often the billing codes accurately identified planned readmissions. This would be an important area to consider for future study.

An alternative approach would be to create indicator codes to be applied at the time of readmission that would indicate whether that admission was planned or unplanned. Such a code would have the advantage of allowing each planned readmission to be flagged by the admitting clinicians at the time of admission rather than by an algorithm that inherently cannot be perfect. However, identifying planned readmissions at the time of readmission would also create opportunity for gaming and inconsistent application of definitions between hospitals; additional checks would need to be put in place to guard against these possibilities.

Our study has some limitations. We relied on the opinion of chart abstractors to determine whether a readmission was planned or unplanned; in a few cases, such as smoldering wounds that ultimately require surgical intervention, that determination is debatable. Abstractions were done at local institutions to minimize risks to patient privacy, and therefore we could not centrally verify determinations of planned status except by reviewing source of admission, dates of procedures, and narrative comments reported by the abstractors. Finally, we did not have sufficient volume of planned procedures to determine accuracy of the algorithm for less common procedure categories or individual procedures within categories.

In summary, we developed an algorithm to identify planned readmissions from administrative data that had high specificity and moderate sensitivity, and refined it based on chart validation. This algorithm is in use in public reporting of readmission measures to maximize the probability that the reported readmission rates represent truly unplanned readmissions.[12]

Disclosures: Financial support: This work was performed under contract HHSM-500-2008-0025I/HHSM-500-T0001, Modification No. 000008, titled Measure Instrument Development and Support, funded by the Centers for Medicare and Medicaid Services (CMS), an agency of the US Department of Health and Human Services. Drs. Horwitz and Ross are supported by the National Institute on Aging (K08 AG038336 and K08 AG032886, respectively) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Krumholz is supported by grant U01 HL105270-05 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. No funding source had any role in the study design; in the collection, analysis, and interpretation of data; or in the writing of the article. The CMS reviewed and approved the use of its data for this work and approved submission of the manuscript. All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that all authors have support from the CMS for the submitted work. In addition, Dr. Ross is a member of a scientific advisory board for FAIR Health Inc. Dr. Krumholz chairs a cardiac scientific advisory board for UnitedHealth and is the recipient of research agreements from Medtronic and Johnson & Johnson through Yale University, to develop methods of clinical trial data sharing. All other authors report no conflicts of interest.

Files
References
  1. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
  2. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252.
  3. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  4. Grosso LM, Curtis JP, Lin Z, et al. Hospital-level 30-day all-cause risk-standardized readmission rate following elective primary total hip arthroplasty (THA) and/or total knee arthroplasty (TKA). Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page
  5. Horwitz LI, Partovian C, Lin Z, et al. Development and use of an administrative claims measure for profiling hospital-wide performance on 30-day unplanned readmission. Ann Intern Med. 2014;161(10 suppl):S66-S75.
  6. van Walraven C, Jennings A, Forster AJ. A meta-analysis of hospital 30-day avoidable readmission rates. J Eval Clin Pract. 2011;18(6):1211-1218.
  7. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402.
  8. Horwitz LI, Partovian C, Lin Z, et al. Centers for Medicare 3(4):477-492.
  9. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343.
  10. Centers for Medicare and Medicaid Services. Inpatient Prospective Payment System/Long-Term Care Hospital (IPPS/LTCH) final rule. Fed Regist. 2013;78:50533-50534.
  11. Long SK, Stockley K, Dahlen H. Massachusetts health reforms: uninsurance remains low, self-reported health status improves as state prepares to tackle costs. Health Aff (Millwood). 2012;31(2):444-451.
Journal of Hospital Medicine - 10(10)
670-677

The Centers for Medicare & Medicaid Services (CMS) publicly reports all‐cause risk‐standardized readmission rates after acute‐care hospitalization for acute myocardial infarction, pneumonia, heart failure, total hip and knee arthroplasty, chronic obstructive pulmonary disease, stroke, and for patients hospital‐wide.[1, 2, 3, 4, 5] Ideally, these measures should capture unplanned readmissions that arise from acute clinical events requiring urgent rehospitalization. Planned readmissions, which are scheduled admissions usually involving nonurgent procedures, may not be a signal of quality of care. Including planned readmissions in readmission quality measures could create a disincentive to provide appropriate care to patients who are scheduled for elective or necessary procedures unrelated to the quality of the prior admission. Accordingly, under contract to the CMS, we were asked to develop an algorithm to identify planned readmissions. A version of this algorithm is now incorporated into all publicly reported readmission measures.

Given the widespread use of the planned readmission algorithm in public reporting and its implications for hospital quality measurement and evaluation, the objective of this study was to describe the development process, and to validate and refine the algorithm by reviewing charts of readmitted patients.

METHODS

Algorithm Development

To create a planned readmission algorithm, we first defined "planned." We determined that readmissions for obstetrical delivery, maintenance chemotherapy, major organ transplant, and rehabilitation should always be considered planned in the sense that they are desired and/or inevitable, even if not specifically planned on a certain date. Apart from these specific types of readmissions, we defined planned readmissions as nonacute readmissions for scheduled procedures, because the vast majority of planned admissions are related to procedures. We also defined readmissions for acute illness or for complications of care as unplanned for the purposes of a quality measure. Even if such readmissions included a potentially planned procedure, they should not be removed from the measure outcome, because complications of care represent an important dimension of quality that should not be excluded from outcome measurement. This definition of planned readmissions does not imply that all unplanned readmissions are unexpected or avoidable. However, it has proven very difficult to reliably define avoidable readmissions, even by expert review of charts, and we did not attempt to do so here.[6, 7]

In the second stage, we operationalized this definition into an algorithm. We used the Agency for Healthcare Research and Quality's Clinical Classification Software (CCS) codes to group thousands of individual procedure and diagnosis International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) codes into clinically coherent, mutually exclusive procedure CCS categories and mutually exclusive diagnosis CCS categories, respectively. Clinicians on the investigative team reviewed the procedure categories to identify those that are commonly planned and that would require inpatient admission. We also reviewed the diagnosis categories to identify acute diagnoses unlikely to accompany elective procedures. We then created a flow diagram through which every readmission could be run to determine whether it was planned or unplanned based on our categorizations of procedures and diagnoses (Figure 1, and Supporting Information, Appendix A, in the online version of this article). This version of the algorithm (v1.0) was submitted to the National Quality Forum (NQF) as part of the hospital-wide readmission measure. The measure (NQF #1789) received endorsement in April 2012.
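In rough terms, the decision flow described above can be sketched as a short function. This is an illustrative reading of the logic, not the CMS implementation; the CCS sets below are small hypothetical subsets chosen for demonstration (only diagnosis CCS 45, maintenance chemotherapy, is taken from this article), and the full lists appear in Appendix A.

```python
# Illustrative sketch of the planned readmission classification flow (Figure 1).
# The CCS sets below are small hypothetical subsets for demonstration only;
# the full lists are given in Appendix A of the article.

ALWAYS_PLANNED_DX = {45}                  # e.g., maintenance chemotherapy (diagnosis CCS 45)
POTENTIALLY_PLANNED_PROC = {43, 44, 158}  # assumed examples: heart valve procedures, CABG, spinal fusion
ACUTE_DX = {100}                          # assumed example: acute myocardial infarction

def is_planned(principal_dx_ccs, procedure_ccs):
    """Return True if a readmission is classified as planned."""
    # Step 1: certain readmission types are always considered planned.
    if principal_dx_ccs in ALWAYS_PLANNED_DX:
        return True
    # Step 2: otherwise, a readmission is planned only if it includes a
    # potentially planned procedure AND the principal diagnosis is not acute.
    has_planned_procedure = bool(set(procedure_ccs) & POTENTIALLY_PLANNED_PROC)
    return has_planned_procedure and principal_dx_ccs not in ACUTE_DX

# A heart valve procedure (CCS 43) with a nonacute principal diagnosis is planned;
# the same procedure with an acute principal diagnosis (CCS 100) is not.
print(is_planned(101, [43]), is_planned(100, [43]))  # True False
```

The key design point, mirrored in the prose, is that an acute principal diagnosis overrides any potentially planned procedure.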

Figure 1
Flow diagram for planned readmissions (see Supporting Information, Appendix A, in the online version of this article for referenced tables).

In the third stage of development, we posted the algorithm for 2 public comment periods and recruited 27 outside experts to review and refine the algorithm following a standardized, structured process (see Supporting Information, Appendix B, in the online version of this article). Because the measures publicly report and hold hospitals accountable for unplanned readmission rates, we felt it most important that the algorithm include as few planned readmissions in the reported, unplanned outcome as possible (ie, have high negative predictive value). Therefore, in equivocal situations in which experts felt procedure categories were equally often planned or unplanned, we added those procedures to the potentially planned list. We also solicited feedback from hospitals on algorithm performance during a confidential test run of the hospital‐wide readmission measure in the fall of 2012. Based on all of this feedback, we made a number of changes to the algorithm, which was then identified as v2.1. Version 2.1 of the algorithm was submitted to the NQF as part of the endorsement process for the acute myocardial infarction and heart failure readmission measures and was endorsed by the NQF in January 2013. The algorithm (v2.1) is now applied, adapted if necessary, to all publicly reported readmission measures.[8]

Algorithm Validation: Study Cohort

We recruited 2 hospital systems to participate in a chart validation study of the accuracy of the planned readmission algorithm (v2.1). Within these 2 health systems, we selected 7 hospitals with varying bed size, teaching status, and safety-net status. Each health system included 1 large academic teaching hospital that serves as a regional referral center. For each hospital's index admissions, we applied the inclusion and exclusion criteria from the hospital-wide readmission measure. Index admissions were included for patients age 65 years or older; enrolled in Medicare fee-for-service (FFS); discharged from a nonfederal, short-stay, acute-care hospital or critical access hospital; without an in-hospital death; not transferred to another acute-care facility; and enrolled in Part A Medicare for 1 year prior to discharge. We excluded index admissions for patients without at least 30 days postdischarge enrollment in FFS Medicare, discharged against medical advice, admitted for medical treatment of cancer or primary psychiatric disease, admitted to a Prospective Payment System-exempt cancer hospital, or who died during the index hospitalization. In addition, for this study, we included only index admissions that were followed by a readmission to a hospital within the participating health system between July 1, 2011 and June 30, 2012. Institutional review board approval was obtained from each of the participating health systems, which granted waivers of signed informed consent and Health Insurance Portability and Accountability Act waivers.

Algorithm Validation: Sample Size Calculation

We determined a priori that the minimum acceptable positive predictive value, or proportion of all readmissions the algorithm labels planned that are truly planned, would be 60%, and the minimum acceptable negative predictive value, or proportion of all readmissions the algorithm labels as unplanned that are truly unplanned, would be 80%. We calculated the sample size required to be confident of these values within ±10% and determined we would need a total of 291 planned charts and 162 unplanned charts. We inflated these numbers by 20% to account for missing or unobtainable charts, for a total of 550 charts. To achieve this sample size, we included all eligible readmissions from all participating hospitals that were categorized as planned. At the 5 smaller hospitals, we randomly selected an equal number of unplanned readmissions occurring at any hospital in the same healthcare system. At the 2 largest hospitals, we randomly selected 50 unplanned readmissions occurring at any hospital in the same healthcare system.

Algorithm Validation: Data Abstraction

We developed an abstraction tool, tested and refined it using sample charts, and built the final tool into a secure, password-protected Microsoft Access 2007 (Microsoft Corp., Redmond, WA) database (see Supporting Information, Appendix C, in the online version of this article). Experienced chart abstractors with RN or MD degrees from each hospital site participated in a 1-hour training session to become familiar with reviewing medical charts, defining planned/unplanned readmissions, and the data abstraction process. For each readmission, we asked abstractors to review as needed: emergency department triage and physician notes, admission history and physical, operative report, discharge summary, and/or discharge summary from a prior admission. The abstractors verified the accuracy of the administrative billing data, including procedures and principal diagnosis. In addition, they abstracted the source of admission and dates of all major procedures. Then the abstractors provided their opinion and supporting rationale as to whether a readmission was planned or unplanned. They were not asked to determine whether the readmission was preventable. To determine the inter-rater reliability of data abstraction, an independent abstractor at each health system recoded a random sample of 10% of the charts.

Statistical Analysis

To ensure that we had obtained a representative sample of charts, we identified the 10 most commonly planned procedures among cases identified as planned by the algorithm in the validation cohort and then compared this with planned cases nationally. To confirm the reliability of the abstraction process, we used the kappa statistic to determine the inter‐rater reliability of the determination of planned or unplanned status. Additionally, the full study team, including 5 practicing clinicians, reviewed the details of every chart abstraction in which the algorithm was found to have misclassified the readmission as planned or unplanned. In 11 cases we determined that the abstractor had misunderstood the definition of planned readmission (ie, not all direct admissions are necessarily planned) and we reclassified the chart review assignment accordingly.
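For reference, the kappa statistic compares observed agreement between two raters with the agreement expected by chance from each rater's marginal label frequencies. A minimal sketch, using hypothetical labels (P = planned, U = unplanned) rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical labels."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of charts on which the raters agree.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum((c1[label] / n) * (c2[label] / n) for label in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# Toy example with 10 double-coded charts (hypothetical data, not the study's).
a = ["U", "U", "P", "U", "P", "U", "U", "P", "U", "U"]
b = ["U", "U", "P", "U", "U", "U", "U", "P", "U", "U"]
print(round(cohens_kappa(a, b), 2))  # 0.74
```

A kappa near the study's 0.83 (reported below) indicates substantial to near-perfect agreement beyond chance.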

We calculated sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm for the validation cohort as a whole, weighted to account for the prevalence of planned readmissions as defined by the algorithm in the national data (7.8%). Weighting is necessary because we did not obtain a pure random sample, but rather selected a stratified sample that oversampled algorithm-identified planned readmissions.[9] We also calculated these rates separately for large hospitals (>600 beds) and for small hospitals (≤600 beds).
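One standard way to carry out such a reweighting is to scale the oversampled algorithm-planned stratum so that it represents 7.8% of the weighted total. The cell counts below are the study's overall results (reported in the Results section); the weighting scheme shown is an assumption about the authors' exact computation, not their published code.

```python
# Sketch of weighted test characteristics for a stratified validation sample.
# Cell counts: algorithm classification vs. chart (true) classification.
TP, FP = 181, 170   # algorithm said planned; chart said planned / unplanned
FN, TN = 15, 266    # algorithm said unplanned; chart said planned / unplanned

n_flagged_planned = TP + FP      # oversampled stratum (algorithm-planned)
n_flagged_unplanned = FN + TN

# Weight the algorithm-planned stratum so it makes up 7.8% of the weighted
# total, matching the national prevalence of algorithm-flagged planned cases.
w = (0.078 / (1 - 0.078)) * (n_flagged_unplanned / n_flagged_planned)

wTP, wFP = TP * w, FP * w
sensitivity = wTP / (wTP + FN)
specificity = TN / (TN + wFP)
ppv = TP / (TP + FP)             # within-stratum; unaffected by the weighting
npv = TN / (TN + FN)

print(f"sens={sensitivity:.1%} spec={specificity:.1%} ppv={ppv:.1%} npv={npv:.1%}")
```

Under these assumptions the sketch closely reproduces the reported 45.1%, 95.9%, 51.6%, and 94.7%; small discrepancies reflect rounding of the published inputs.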

Finally, we examined performance of the algorithm for individual procedures and diagnoses to determine whether any procedures or diagnoses should be added or removed from the algorithm. First, we reviewed the diagnoses, procedures, and brief narratives provided by the abstractors for all cases in which the algorithm misclassified the readmission as either planned or unplanned. Second, we calculated the positive predictive value for each procedure that had been flagged as planned by the algorithm, and reviewed all readmissions (correctly and incorrectly classified) in which procedures with low positive predictive value took place. We also calculated the frequency with which the procedure was the only qualifying procedure resulting in an accurate or inaccurate classification. Third, to identify changes that should be made to the lists of acute and nonacute diagnoses, we reviewed the principal diagnosis for all readmissions misclassified by the algorithm as either planned or unplanned, and examined the specific ICD‐9‐CM codes within each CCS group that were most commonly associated with misclassifications.

After determining the changes that should be made to the algorithm based on these analyses, we recalculated the sensitivity, specificity, positive predictive value, and negative predictive value of the proposed revised algorithm (v3.0). All analyses used SAS version 9.3 (SAS Institute, Cary, NC).

RESULTS

Study Cohort

Characteristics of participating hospitals are shown in Table 1. Hospitals represented in this sample ranged in size, teaching status, and safety net status, although all were nonprofit. We selected 663 readmissions for review, 363 planned and 300 unplanned. Overall, we were able to select 80% of hospitals' planned cases for review; the remainder occurred at hospitals outside the participating hospital system. Abstractors were able to locate and review 634 (96%) of the eligible charts (range, 86%-100% per hospital). The kappa statistic for inter-rater reliability was 0.83.

Hospital Characteristics
Description | Hospitals, N | Readmissions Selected for Review, N* | Readmissions Reviewed, N (% of Eligible) | Unplanned Readmissions Reviewed, N | Planned Readmissions Reviewed, N | % of Hospital's Planned Readmissions Reviewed*
  • NOTE: *Nonselected cases were readmitted to hospitals outside the system and could not be reviewed.

All hospitals | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
No. of beds: >600 | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
No. of beds: >300-600 | 2 | 190 | 173 (91.1) | 85 | 88 | 87.1
No. of beds: <300 | 3 | 127 | 122 (96.0) | 82 | 40 | 44.9
Ownership: Government | 0 | | | | |
Ownership: For profit | 0 | | | | |
Ownership: Not for profit | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
Teaching status: Teaching | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Teaching status: Nonteaching | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Safety net status: Safety net | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Safety net status: Nonsafety net | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Region: New England | 3 | 409 | 392 (95.8) | 155 | 237 | 85.9
Region: South Central | 4 | 254 | 242 (95.3) | 128 | 114 | 64.0

The study sample included 57/67 (85%) of the procedure or condition categories on the potentially planned list. The most common procedure CCS categories among planned readmissions (v2.1) in the validation cohort were very similar to those in the national dataset (see Supporting Information, Appendix D, in the online version of this article). Of the top 20 most commonly planned procedure CCS categories in the validation set, all but 2, therapeutic radiology for cancer treatment (CCS 211) and peripheral vascular bypass (CCS 55), were among the top 20 most commonly planned procedure CCS categories in the national data.

Test Characteristics of Algorithm

The weighted test characteristics of the current algorithm (v2.1) are shown in Table 2. Overall, the algorithm correctly identified 266 readmissions as unplanned and 181 readmissions as planned, and misidentified 170 readmissions as planned and 15 as unplanned. Once weighted to account for the stratified sampling design, the overall prevalence of true planned readmissions was 8.9% of readmissions. The weighted sensitivity was 45.1% overall and was higher in large teaching centers than in smaller community hospitals. The weighted specificity was 95.9%. The positive predictive value was 51.6%, and the negative predictive value was 94.7%.

Test Characteristics of the Algorithm
Cohort | Sensitivity | Specificity | Positive Predictive Value | Negative Predictive Value
Algorithm v2.1: Full cohort | 45.1% | 95.9% | 51.6% | 94.7%
Algorithm v2.1: Large hospitals | 50.9% | 96.1% | 53.8% | 95.6%
Algorithm v2.1: Small hospitals | 40.2% | 95.5% | 47.7% | 94.0%
Revised algorithm v3.0: Full cohort | 49.8% | 96.5% | 58.7% | 94.5%
Revised algorithm v3.0: Large hospitals | 57.1% | 96.8% | 63.0% | 95.9%
Revised algorithm v3.0: Small hospitals | 42.6% | 95.9% | 52.6% | 93.9%

Accuracy of Individual Diagnoses and Procedures

The positive predictive value of the algorithm for individual procedure categories varied widely, from 0% to 100% among procedures with at least 10 cases (Table 3). The procedure for which the algorithm was least accurate was CCS 211, therapeutic radiology for cancer treatment (0% positive predictive value). By contrast, maintenance chemotherapy (90%) and other therapeutic procedures, hemic and lymphatic system (100%) were most accurate. Common procedures with less than 50% positive predictive value (ie, that the algorithm commonly misclassified as planned) were diagnostic cardiac catheterization (25%); debridement of wound, infection, or burn (25%); amputation of lower extremity (29%); insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (33%); and other hernia repair (43%). Of these, diagnostic cardiac catheterization and cardiac devices are the first and second most common procedures nationally, respectively.

Positive Predictive Value of Algorithm by Procedure Category (Among Procedures With at Least Ten Readmissions in Validation Cohort)
Readmission Procedure CCS Code | Total Categorized as Planned by Algorithm, N | Verified as Planned by Chart Review, N | Positive Predictive Value
  • NOTE: Abbreviations: CCS, Clinical Classification Software; OR, operating room.

47 Diagnostic cardiac catheterization; coronary arteriography | 44 | 11 | 25%
224 Cancer chemotherapy | 40 | 22 | 55%
157 Amputation of lower extremity | 31 | 9 | 29%
49 Other operating room heart procedures | 27 | 16 | 59%
48 Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator | 24 | 8 | 33%
43 Heart valve procedures | 20 | 16 | 80%
Maintenance chemotherapy (diagnosis CCS 45) | 20 | 18 | 90%
78 Colorectal resection | 18 | 9 | 50%
169 Debridement of wound, infection or burn | 16 | 4 | 25%
84 Cholecystectomy and common duct exploration | 16 | 5 | 31%
99 Other OR gastrointestinal therapeutic procedures | 16 | 8 | 50%
158 Spinal fusion | 15 | 11 | 73%
142 Partial excision bone | 14 | 10 | 71%
86 Other hernia repair | 14 | 6 | 42%
44 Coronary artery bypass graft | 13 | 10 | 77%
67 Other therapeutic procedures, hemic and lymphatic system | 13 | 13 | 100%
211 Therapeutic radiology for cancer treatment | 12 | 0 | 0%
45 Percutaneous transluminal coronary angioplasty | 11 | 7 | 64%
Total | 497 | 272 | 54.7%

The readmissions with least abstractor agreement were those involving CCS 157 (amputation of lower extremity) and CCS 169 (debridement of wound, infection or burn). Readmissions for these procedures were nearly always performed as a consequence of acute worsening of chronic conditions such as osteomyelitis or ulceration. Abstractors were divided over whether these readmissions were appropriate to call planned.

Changes to the Algorithm

We determined that the accuracy of the algorithm would be improved by removing 2 procedure categories from the planned procedure list (therapeutic radiation [CCS 211] and cancer chemotherapy [CCS 224]), adding 1 diagnosis category to the acute diagnosis list (hypertension with complications [CCS 99]), and splitting 2 diagnosis condition categories into acute and nonacute ICD-9-CM codes (pancreatic disorders [CCS 152] and biliary tract disease [CCS 149]). Detailed rationales for each modification to the planned readmission algorithm are described in Table 4. We felt further examination of diagnostic cardiac catheterization and cardiac devices was warranted given their high frequency, despite low positive predictive value. We also elected not to alter the categorization of amputation or debridement because it was not easy to determine whether these admissions were planned or unplanned even with chart review. We plan further analyses of these procedure categories.

Suggested Changes to Planned Readmission Algorithm v2.1 With Rationale
  • NOTE: Abbreviations: CCS, Clinical Classification Software; ICD-9, International Classification of Diseases, Ninth Revision. Counts are shown as algorithm classification/chart classification: N. *Number of cases in which CCS 47 was the only qualifying procedure. †Number of cases in which CCS 48 was the only qualifying procedure.

Remove from planned procedure list: Therapeutic radiation (CCS 211)
  Accurate: planned/planned 0; unplanned/unplanned 0. Inaccurate: unplanned/planned 0; planned/unplanned 12.
  Rationale: The algorithm was inaccurate in every case. All therapeutic radiology during readmissions was performed because of acute illness (pain crisis, neurologic crisis) or because scheduled treatment occurred during an unplanned readmission. In national data, this ranks as the 25th most common planned procedure identified by the algorithm v2.1.

Remove from planned procedure list: Cancer chemotherapy (CCS 224)
  Accurate: planned/planned 22; unplanned/unplanned 0. Inaccurate: unplanned/planned 0; planned/unplanned 18.
  Rationale: Of the 22 correctly identified as planned, 18 (82%) would already have been categorized as planned because of a principal diagnosis of maintenance chemotherapy. Therefore, removing CCS 224 from the planned procedure list would only miss a small fraction of planned readmissions but would avoid a large number of misclassifications. In national data, this ranks as the 8th most common planned procedure identified by the algorithm v2.1.

Add to planned procedure list: None
  Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. A handful of these cases were missed because the planned procedure was not on the current planned procedure list; however, those procedures (eg, abdominal paracentesis, colonoscopy, endoscopy) were nearly always unplanned overall and should therefore not be added as procedures that potentially qualify an admission as planned.

Remove from acute diagnosis list: None
  Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. The relevant disqualifying acute diagnoses were much more often associated with unplanned readmissions in our dataset.

Add to acute diagnosis list: Hypertension with complications (CCS 99)
  Accurate: planned/planned 1; unplanned/unplanned 2. Inaccurate: unplanned/planned 0; planned/unplanned 10.
  Rationale: This CCS was associated with only 1 planned readmission (for elective nephrectomy, a very rare procedure). Every other time this CCS appeared in the dataset, it was associated with an unplanned readmission (12/13, 92%); 10 of those, however, were misclassified by the algorithm as planned because they were not excluded by diagnosis (91% error rate). Consequently, adding this CCS to the acute diagnosis list is likely to miss only a very small fraction of planned readmissions, while making the overall algorithm much more accurate.

Split diagnosis condition category into component ICD-9 codes: Pancreatic disorders (CCS 152)
  Accurate: planned/planned 0; unplanned/unplanned 1. Inaccurate: unplanned/planned 0; planned/unplanned 2.
  Rationale: ICD-9 code 577.0 (acute pancreatitis) is the only acute code in this CCS. Acute pancreatitis was present in 2 cases that were misclassified as planned. Clinically, there is no situation in which a planned procedure would reasonably be performed in the setting of acute pancreatitis. Moving ICD-9 code 577.0 to the acute list and leaving the rest of the ICD-9 codes in CCS 152 on the nonacute list will enable the algorithm to continue to identify planned procedures for chronic pancreatitis.

Split diagnosis condition category into component ICD-9 codes: Biliary tract disease (CCS 149)
  Accurate: planned/planned 2; unplanned/unplanned 3. Inaccurate: unplanned/planned 0; planned/unplanned 12.
  Rationale: This CCS is a mix of acute and chronic diagnoses. Of 14 charts classified as planned with CCS 149 in the principal diagnosis field, 12 were misclassified (of which 10 were associated with cholecystectomy). Separating out the acute and nonacute diagnoses will increase the accuracy of the algorithm while still ensuring that planned cholecystectomies and other procedures can be identified. Of the ICD-9 codes in CCS 149, the following will be added to the acute diagnosis list: 574.0, 574.3, 574.6, 574.8, 575.0, 575.12, 576.1.

Consider for change after additional study: Diagnostic cardiac catheterization (CCS 47)
  Accurate: planned/planned 3*; unplanned/unplanned 13*. Inaccurate: unplanned/planned 0*; planned/unplanned 25*.
  Rationale: The algorithm misclassified as planned 25/38 (66%) unplanned readmissions in which diagnostic catheterizations were the only qualifying planned procedure. It also correctly identified 3/3 (100%) planned readmissions in which diagnostic cardiac catheterizations were the only qualifying planned procedure. This is the highest volume procedure in national data.

Consider for change after additional study: Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (CCS 48)
  Accurate: planned/planned 7†; unplanned/unplanned 1†. Inaccurate: unplanned/planned 1†; planned/unplanned 4†.
  Rationale: The algorithm misclassified as planned 4/5 (80%) unplanned readmissions in which cardiac devices were the only qualifying procedure. However, it also correctly identified 7/8 (87.5%) planned readmissions in which cardiac devices were the only qualifying planned procedure. CCS 48 is the second most common planned procedure category nationally.

The revised algorithm (v3.0) had a weighted sensitivity of 49.8%, weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5% (Table 2). In aggregate, these changes would increase the reported unplanned readmission rate from 16.0% to 16.1% in the hospital‐wide readmission measure, using 2011 to 2012 data, and would decrease the fraction of all readmissions considered planned from 7.8% to 7.2%.

DISCUSSION

We developed an algorithm based on administrative data that in its currently implemented form is very accurate at identifying unplanned readmissions, ensuring that readmissions included in publicly reported readmission measures are likely to be truly unplanned. However, nearly half of readmissions the algorithm classifies as planned are actually unplanned. That is, the algorithm is overcautious in excluding unplanned readmissions that could have counted as outcomes, particularly among admissions that include diagnostic cardiac catheterization or placement of cardiac devices (pacemakers, defibrillators). However, these errors only occur within the 7.8% of readmissions that are classified as planned and therefore do not affect overall readmission rates dramatically. A perfect algorithm would reclassify approximately half of these planned readmissions as unplanned, increasing the overall readmission rate by 0.6 percentage points.

On the other hand, the algorithm also only identifies approximately half of true planned readmissions as planned. Because the true prevalence of planned readmissions is low (approximately 9% of readmissions based on weighted chart review prevalence, or an absolute rate of 1.4%), this low sensitivity has a small effect on algorithm performance. Removing all true planned readmissions from the measure outcome would decrease the overall readmission rate by 0.8 percentage points, similar to the expected 0.6 percentage point increase that would result from better identifying unplanned readmissions; thus, a perfect algorithm would likely decrease the reported unplanned readmission rate by a net 0.2%. Overall, the existing algorithm appears to come close to the true prevalence of planned readmissions, despite inaccuracy on an individual‐case basis. The algorithm performed best at large hospitals, which are at greatest risk of being statistical outliers and of accruing penalties under the Hospital Readmissions Reduction Program.[10]
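The percentage-point arithmetic in this and the preceding paragraph can be checked directly from the figures reported in the text (a 16.0% reported unplanned rate, 7.8% of readmissions flagged planned, PPV 51.6%, NPV 94.7%). This is a back-of-the-envelope interpretation of the authors' reasoning, not their published calculation.

```python
# Check the approximate effect of a "perfect" algorithm on the reported rate,
# using values stated in the text. All rates are in percentage points.
reported_unplanned = 16.0      # reported (unplanned) readmission rate
flagged_planned_share = 0.078  # share of all readmissions flagged planned
ppv, npv = 0.516, 0.947

# Total readmission rate, recovering the excluded algorithm-planned cases.
all_readmissions = reported_unplanned / (1 - flagged_planned_share)
flagged_planned = all_readmissions * flagged_planned_share

# Truly unplanned readmissions currently excluded as "planned": add back.
add_back = flagged_planned * (1 - ppv)
# Truly planned readmissions currently counted as unplanned: remove.
remove = reported_unplanned * (1 - npv)

print(f"add back ~{add_back:.2f} pp, remove ~{remove:.2f} pp, "
      f"net change ~{add_back - remove:+.2f} pp")
```

Under these inputs the sketch gives roughly +0.7, -0.8, and a net of about -0.2 percentage points, consistent within rounding with the +0.6, -0.8, and -0.2 figures in the text.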

We identified several changes that marginally improved the performance of the algorithm by reducing the number of unplanned readmissions that are incorrectly removed from the measure, while avoiding the inappropriate inclusion of planned readmissions in the outcome. This revised algorithm, v3.0, was applied to public reporting of readmission rates at the end of 2014. Overall, implementing these changes increases the reported readmission rate very slightly. We also identified other procedures associated with high inaccuracy rates, removal of which would have larger impact on reporting rates, and which therefore merit further evaluation.

The Centers for Medicare & Medicaid Services (CMS) publicly reports all‐cause risk‐standardized readmission rates after acute‐care hospitalization for acute myocardial infarction, pneumonia, heart failure, total hip and knee arthroplasty, chronic obstructive pulmonary disease, stroke, and for patients hospital‐wide.[1, 2, 3, 4, 5] Ideally, these measures should capture unplanned readmissions that arise from acute clinical events requiring urgent rehospitalization. Planned readmissions, which are scheduled admissions usually involving nonurgent procedures, may not be a signal of quality of care. Including planned readmissions in readmission quality measures could create a disincentive to provide appropriate care to patients who are scheduled for elective or necessary procedures unrelated to the quality of the prior admission. Accordingly, under contract to the CMS, we were asked to develop an algorithm to identify planned readmissions. A version of this algorithm is now incorporated into all publicly reported readmission measures.

Given the widespread use of the planned readmission algorithm in public reporting and its implications for hospital quality measurement and evaluation, the objective of this study was to describe the development process, and to validate and refine the algorithm by reviewing charts of readmitted patients.

METHODS

Algorithm Development

To create a planned readmission algorithm, we first defined "planned." We determined that readmissions for obstetrical delivery, maintenance chemotherapy, major organ transplant, and rehabilitation should always be considered planned in the sense that they are desired and/or inevitable, even if not specifically planned on a certain date. Apart from these specific types of readmissions, we defined planned readmissions as nonacute readmissions for scheduled procedures, because the vast majority of planned admissions are related to procedures. We also defined readmissions for acute illness or for complications of care as unplanned for the purposes of a quality measure. Even if such readmissions included a potentially planned procedure, they should not be removed from the measure outcome, because complications of care represent an important dimension of quality that should not be excluded from outcome measurement. This definition of planned readmissions does not imply that all unplanned readmissions are unexpected or avoidable. However, it has proven very difficult to reliably define avoidable readmissions, even by expert review of charts, and we did not attempt to do so here.[6, 7]

In the second stage, we operationalized this definition into an algorithm. We used the Agency for Healthcare Research and Quality's Clinical Classification Software (CCS) codes to group thousands of individual procedure and diagnosis International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes into clinically coherent, mutually exclusive procedure CCS categories and mutually exclusive diagnosis CCS categories, respectively. Clinicians on the investigative team reviewed the procedure categories to identify those that are commonly planned and that would require inpatient admission. We also reviewed the diagnosis categories to identify acute diagnoses unlikely to accompany elective procedures. We then created a flow diagram through which every readmission could be run to determine whether it was planned or unplanned based on our categorizations of procedures and diagnoses (Figure 1, and Supporting Information, Appendix A, in the online version of this article). This version of the algorithm (v1.0) was submitted to the National Quality Forum (NQF) as part of the hospital-wide readmission measure. The measure (NQF #1789) received endorsement in April 2012.

Figure 1
Flow diagram for planned readmissions (see Supporting Information, Appendix A, in the online version of this article for referenced tables).
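In pseudocode form, the flow diagram reduces to a short decision rule: a readmission is planned if it is one of the always-planned care types, or if it includes a potentially planned procedure and its principal diagnosis is not acute. The sketch below is illustrative only: the category names are hypothetical stand-ins for the full CCS tables in Appendix A, and the published measure was implemented in SAS rather than Python.

```python
# Illustrative sketch of the planned readmission decision flow.
# The real algorithm uses full AHRQ CCS category tables; the tiny sets
# below are hypothetical stand-ins for demonstration only.

ALWAYS_PLANNED_DX = {"maintenance chemotherapy", "rehabilitation"}  # plus delivery, transplant
POTENTIALLY_PLANNED_PROCS = {"heart valve procedures", "spinal fusion", "cholecystectomy"}
ACUTE_DX = {"acute myocardial infarction", "sepsis", "acute pancreatitis"}

def is_planned(principal_diagnosis: str, procedures: set[str]) -> bool:
    """Return True if the readmission would be classified as planned."""
    # Step 1: some readmission types are always considered planned.
    if principal_diagnosis in ALWAYS_PLANNED_DX:
        return True
    # Step 2: a potentially planned procedure qualifies only when the
    # principal diagnosis is not an acute illness or complication of care.
    if procedures & POTENTIALLY_PLANNED_PROCS:
        return principal_diagnosis not in ACUTE_DX
    # Step 3: everything else is unplanned.
    return False
```

For example, a readmission for chronic cholecystitis with a cholecystectomy would be classified planned, whereas the same procedure performed during an admission for sepsis would remain unplanned.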

In the third stage of development, we posted the algorithm for 2 public comment periods and recruited 27 outside experts to review and refine the algorithm following a standardized, structured process (see Supporting Information, Appendix B, in the online version of this article). Because the measures publicly report and hold hospitals accountable for unplanned readmission rates, we felt it most important that the algorithm include as few planned readmissions in the reported, unplanned outcome as possible (ie, have high negative predictive value). Therefore, in equivocal situations in which experts felt procedure categories were equally often planned or unplanned, we added those procedures to the potentially planned list. We also solicited feedback from hospitals on algorithm performance during a confidential test run of the hospital‐wide readmission measure in the fall of 2012. Based on all of this feedback, we made a number of changes to the algorithm, which was then identified as v2.1. Version 2.1 of the algorithm was submitted to the NQF as part of the endorsement process for the acute myocardial infarction and heart failure readmission measures and was endorsed by the NQF in January 2013. The algorithm (v2.1) is now applied, adapted if necessary, to all publicly reported readmission measures.[8]

Algorithm Validation: Study Cohort

We recruited 2 hospital systems to participate in a chart validation study of the accuracy of the planned readmission algorithm (v2.1). Within these 2 health systems, we selected 7 hospitals with varying bed size, teaching status, and safety-net status. Each health system included 1 large academic teaching hospital that serves as a regional referral center. For each hospital's index admissions, we applied the inclusion and exclusion criteria from the hospital-wide readmission measure. Index admissions were included for patients age 65 years or older; enrolled in Medicare fee-for-service (FFS); discharged from a nonfederal, short-stay, acute-care hospital or critical access hospital; without an in-hospital death; not transferred to another acute-care facility; and enrolled in Part A Medicare for 1 year prior to discharge. We excluded index admissions for patients without at least 30 days postdischarge enrollment in FFS Medicare, discharged against medical advice, admitted for medical treatment of cancer or primary psychiatric disease, admitted to a Prospective Payment System-exempt cancer hospital, or who died during the index hospitalization. In addition, for this study, we included only index admissions that were followed by a readmission to a hospital within the participating health system between July 1, 2011 and June 30, 2012. Institutional review board approval was obtained from each of the participating health systems, which granted waivers of signed informed consent and Health Insurance Portability and Accountability Act waivers.

Algorithm Validation: Sample Size Calculation

We determined a priori that the minimum acceptable positive predictive value, or proportion of all readmissions the algorithm labels planned that are truly planned, would be 60%, and the minimum acceptable negative predictive value, or proportion of all readmissions the algorithm labels as unplanned that are truly unplanned, would be 80%. We calculated the sample size required to be confident of these values within ±10% and determined we would need a total of 291 planned charts and 162 unplanned charts. We inflated these numbers by 20% to account for missing or unobtainable charts, for a total of 550 charts. To achieve this sample size, we included all eligible readmissions from all participating hospitals that were categorized as planned. At the 5 smaller hospitals, we randomly selected an equal number of unplanned readmissions occurring at any hospital in its healthcare system. At the 2 largest hospitals, we randomly selected 50 unplanned readmissions occurring at any hospital in its healthcare system.
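The inflation step is simple arithmetic; the sketch below takes the required counts of 291 and 162 from the text (the underlying confidence calculation is not restated here) and applies the 20% allowance for missing charts.

```python
# 20% inflation of required chart counts to allow for missing or
# unobtainable charts (required counts taken from the text).
planned_required = 291
unplanned_required = 162

inflated_total = 1.20 * (planned_required + unplanned_required)
print(round(inflated_total, 1))  # 543.6; the study rounded this up to a 550-chart target
```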

Algorithm Validation: Data Abstraction

We developed an abstraction tool, tested and refined it using sample charts, and built the final tool into a secure, password-protected Microsoft Access 2007 (Microsoft Corp., Redmond, WA) database (see Supporting Information, Appendix C, in the online version of this article). Experienced chart abstractors with RN or MD degrees from each hospital site participated in a 1-hour training session to become familiar with reviewing medical charts, defining planned/unplanned readmissions, and the data abstraction process. For each readmission, we asked abstractors to review as needed: emergency department triage and physician notes, admission history and physical, operative report, discharge summary, and/or discharge summary from a prior admission. The abstractors verified the accuracy of the administrative billing data, including procedures and principal diagnosis. In addition, they abstracted the source of admission and dates of all major procedures. Then the abstractors provided their opinion and supporting rationale as to whether a readmission was planned or unplanned. They were not asked to determine whether the readmission was preventable. To determine the inter-rater reliability of data abstraction, an independent abstractor at each health system recoded a random sample of 10% of the charts.

Statistical Analysis

To ensure that we had obtained a representative sample of charts, we identified the 10 most commonly planned procedures among cases identified as planned by the algorithm in the validation cohort and then compared this with planned cases nationally. To confirm the reliability of the abstraction process, we used the kappa statistic to determine the inter‐rater reliability of the determination of planned or unplanned status. Additionally, the full study team, including 5 practicing clinicians, reviewed the details of every chart abstraction in which the algorithm was found to have misclassified the readmission as planned or unplanned. In 11 cases we determined that the abstractor had misunderstood the definition of planned readmission (ie, not all direct admissions are necessarily planned) and we reclassified the chart review assignment accordingly.
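Agreement between the original and independent abstractors can be summarized with Cohen's kappa, computed from a 2x2 planned/unplanned table. The sketch below uses hypothetical counts for illustration, not the study data (the study's observed kappa, reported in the Results, was 0.83).

```python
# Cohen's kappa for two raters classifying charts as planned/unplanned.

def cohens_kappa(table):
    """table[i][j] = charts rated category i by rater 1 and j by rater 2."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_expected = sum(row_totals[i] * col_totals[i] for i in range(k)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical double-abstraction table: 20 planned/planned,
# 40 unplanned/unplanned, and 3 disagreements.
table = [[20, 2], [1, 40]]
print(round(cohens_kappa(table), 2))  # 0.89
```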

We calculated sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm for the validation cohort as a whole, weighted to account for the prevalence of planned readmissions as defined by the algorithm in the national data (7.8%). Weighting is necessary because we did not obtain a pure random sample, but rather selected a stratified sample that oversampled algorithm-identified planned readmissions.[9] We also calculated these rates separately for large hospitals (>600 beds) and for small hospitals (≤600 beds).

Finally, we examined performance of the algorithm for individual procedures and diagnoses to determine whether any procedures or diagnoses should be added or removed from the algorithm. First, we reviewed the diagnoses, procedures, and brief narratives provided by the abstractors for all cases in which the algorithm misclassified the readmission as either planned or unplanned. Second, we calculated the positive predictive value for each procedure that had been flagged as planned by the algorithm, and reviewed all readmissions (correctly and incorrectly classified) in which procedures with low positive predictive value took place. We also calculated the frequency with which the procedure was the only qualifying procedure resulting in an accurate or inaccurate classification. Third, to identify changes that should be made to the lists of acute and nonacute diagnoses, we reviewed the principal diagnosis for all readmissions misclassified by the algorithm as either planned or unplanned, and examined the specific ICD‐9‐CM codes within each CCS group that were most commonly associated with misclassifications.

After determining the changes that should be made to the algorithm based on these analyses, we recalculated the sensitivity, specificity, positive predictive value, and negative predictive value of the proposed revised algorithm (v3.0). All analyses used SAS version 9.3 (SAS Institute, Cary, NC).

RESULTS

Study Cohort

Characteristics of participating hospitals are shown in Table 1. Hospitals represented in this sample ranged in size, teaching status, and safety net status, although all were nonprofit. We selected 663 readmissions for review, 363 planned and 300 unplanned. Overall, we were able to select 80% of hospitals' planned cases for review; the remainder occurred at hospitals outside the participating hospital system. Abstractors were able to locate and review 634 (96%) of the eligible charts (range, 86%-100% per hospital). The kappa statistic for inter-rater reliability was 0.83.

Hospital Characteristics
| Description | Hospitals, N | Readmissions Selected for Review, N* | Readmissions Reviewed, N (% of Eligible) | Unplanned Readmissions Reviewed, N | Planned Readmissions Reviewed, N | % of Hospital's Planned Readmissions Reviewed* |
| --- | --- | --- | --- | --- | --- | --- |
| All hospitals | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3 |
| No. of beds: >600 | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5 |
| No. of beds: >300-600 | 2 | 190 | 173 (91.1) | 85 | 88 | 87.1 |
| No. of beds: <300 | 3 | 127 | 122 (96.0) | 82 | 40 | 44.9 |
| Ownership: government | 0 | | | | | |
| Ownership: for profit | 0 | | | | | |
| Ownership: not for profit | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3 |
| Teaching status: teaching | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5 |
| Teaching status: nonteaching | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4 |
| Safety net status: safety net | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5 |
| Safety net status: nonsafety net | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4 |
| Region: New England | 3 | 409 | 392 (95.8) | 155 | 237 | 85.9 |
| Region: South Central | 4 | 254 | 242 (95.3) | 128 | 114 | 64.0 |

NOTE: *Nonselected cases were readmitted to hospitals outside the system and could not be reviewed.

The study sample included 57/67 (85%) of the procedure or condition categories on the potentially planned list. The most common procedure CCS categories among planned readmissions (v2.1) in the validation cohort were very similar to those in the national dataset (see Supporting Information, Appendix D, in the online version of this article). Of the top 20 most commonly planned procedure CCS categories in the validation set, all but 2, therapeutic radiology for cancer treatment (CCS 211) and peripheral vascular bypass (CCS 55), were among the top 20 most commonly planned procedure CCS categories in the national data.

Test Characteristics of Algorithm

The weighted test characteristics of the current algorithm (v2.1) are shown in Table 2. Overall, the algorithm correctly identified 266 readmissions as unplanned and 181 readmissions as planned, and misidentified 170 readmissions as planned and 15 as unplanned. Once weighted to account for the stratified sampling design, the overall prevalence of true planned readmissions was 8.9% of readmissions. The weighted sensitivity was 45.1% overall and was higher in large teaching centers than in smaller community hospitals. The weighted specificity was 95.9%. The positive predictive value was 51.6%, and the negative predictive value was 94.7%.
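The weighted estimates can be reconstructed from the raw chart-review counts above together with the national algorithm-flagged prevalence of 7.8%: the two sampling strata are reweighted so that algorithm-flagged readmissions make up 7.8% of the population. The sketch below (Python for illustration; the study's analyses used SAS) approximately reproduces the reported figures.

```python
# Weighted test characteristics from a stratified validation sample.
# Raw chart-review counts for algorithm v2.1, taken from the text:
true_planned_flagged = 181      # algorithm planned, chart planned
false_planned_flagged = 170     # algorithm planned, chart unplanned
true_unplanned_unflagged = 266  # algorithm unplanned, chart unplanned
false_unplanned_unflagged = 15  # algorithm unplanned, chart planned

# Within-stratum accuracy is unaffected by the sampling design:
ppv = true_planned_flagged / (true_planned_flagged + false_planned_flagged)
npv = true_unplanned_unflagged / (true_unplanned_unflagged + false_unplanned_unflagged)

# National fraction of readmissions the algorithm flags as planned:
p_flagged = 0.078

# Weighted prevalence of truly planned readmissions:
prevalence = p_flagged * ppv + (1 - p_flagged) * (1 - npv)

sensitivity = p_flagged * ppv / prevalence
specificity = (1 - p_flagged) * npv / (1 - prevalence)

print(f"PPV {ppv:.1%}, NPV {npv:.1%}")        # 51.6%, 94.7%
print(f"prevalence {prevalence:.1%}")         # 8.9%
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")  # 45.0%, 95.9%
```

The sensitivity of 45.0% differs from the reported 45.1% only through rounding of the inputs (eg, the 7.8% national prevalence).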

Test Characteristics of the Algorithm
| Cohort | Sensitivity | Specificity | Positive Predictive Value | Negative Predictive Value |
| --- | --- | --- | --- | --- |
| Algorithm v2.1 | | | | |
| Full cohort | 45.1% | 95.9% | 51.6% | 94.7% |
| Large hospitals | 50.9% | 96.1% | 53.8% | 95.6% |
| Small hospitals | 40.2% | 95.5% | 47.7% | 94.0% |
| Revised algorithm v3.0 | | | | |
| Full cohort | 49.8% | 96.5% | 58.7% | 94.5% |
| Large hospitals | 57.1% | 96.8% | 63.0% | 95.9% |
| Small hospitals | 42.6% | 95.9% | 52.6% | 93.9% |

Accuracy of Individual Diagnoses and Procedures

The positive predictive value of the algorithm for individual procedure categories varied widely, from 0% to 100% among procedures with at least 10 cases (Table 3). The procedure for which the algorithm was least accurate was CCS 211, therapeutic radiology for cancer treatment (0% positive predictive value). By contrast, maintenance chemotherapy (90%) and other therapeutic procedures, hemic and lymphatic system (100%) were most accurate. Common procedures with less than 50% positive predictive value (ie, that the algorithm commonly misclassified as planned) were diagnostic cardiac catheterization (25%); debridement of wound, infection, or burn (25%); amputation of lower extremity (29%); insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (33%); and other hernia repair (43%). Of these, diagnostic cardiac catheterization and cardiac devices are the first and second most common procedures nationally, respectively.

Positive Predictive Value of Algorithm by Procedure Category (Among Procedures With at Least Ten Readmissions in Validation Cohort)
| Readmission Procedure CCS Code | Total Categorized as Planned by Algorithm, N | Verified as Planned by Chart Review, N | Positive Predictive Value |
| --- | --- | --- | --- |
| 47 Diagnostic cardiac catheterization; coronary arteriography | 44 | 11 | 25% |
| 224 Cancer chemotherapy | 40 | 22 | 55% |
| 157 Amputation of lower extremity | 31 | 9 | 29% |
| 49 Other operating room heart procedures | 27 | 16 | 59% |
| 48 Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator | 24 | 8 | 33% |
| 43 Heart valve procedures | 20 | 16 | 80% |
| Maintenance chemotherapy (diagnosis CCS 45) | 20 | 18 | 90% |
| 78 Colorectal resection | 18 | 9 | 50% |
| 169 Debridement of wound, infection or burn | 16 | 4 | 25% |
| 84 Cholecystectomy and common duct exploration | 16 | 5 | 31% |
| 99 Other OR gastrointestinal therapeutic procedures | 16 | 8 | 50% |
| 158 Spinal fusion | 15 | 11 | 73% |
| 142 Partial excision bone | 14 | 10 | 71% |
| 86 Other hernia repair | 14 | 6 | 43% |
| 44 Coronary artery bypass graft | 13 | 10 | 77% |
| 67 Other therapeutic procedures, hemic and lymphatic system | 13 | 13 | 100% |
| 211 Therapeutic radiology for cancer treatment | 12 | 0 | 0% |
| 45 Percutaneous transluminal coronary angioplasty | 11 | 7 | 64% |
| Total | 497 | 272 | 54.7% |

NOTE: Abbreviations: CCS, Clinical Classification Software; OR, operating room.
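Each positive predictive value in the table is simply the chart-verified planned count divided by the algorithm-flagged planned count; a quick sketch checking a few rows:

```python
# Per-procedure positive predictive value: chart-verified planned
# readmissions divided by algorithm-flagged planned readmissions.
# (flagged, verified) pairs taken from the table above.
rows = {
    "47 Diagnostic cardiac catheterization": (44, 11),
    "224 Cancer chemotherapy": (40, 22),
    "211 Therapeutic radiology for cancer treatment": (12, 0),
    "Total": (497, 272),
}

for name, (flagged, verified) in rows.items():
    print(f"{name}: {verified / flagged:.1%}")
# 25.0%, 55.0%, 0.0%, 54.7%
```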

The readmissions with least abstractor agreement were those involving CCS 157 (amputation of lower extremity) and CCS 169 (debridement of wound, infection or burn). Readmissions for these procedures were nearly always performed as a consequence of acute worsening of chronic conditions such as osteomyelitis or ulceration. Abstractors were divided over whether these readmissions were appropriate to call planned.

Changes to the Algorithm

We determined that the accuracy of the algorithm would be improved by removing 2 procedure categories from the planned procedure list (therapeutic radiation [CCS 211] and cancer chemotherapy [CCS 224]), adding 1 diagnosis category to the acute diagnosis list (hypertension with complications [CCS 99]), and splitting 2 diagnosis condition categories into acute and nonacute ICD-9-CM codes (pancreatic disorders [CCS 152] and biliary tract disease [CCS 149]). Detailed rationales for each modification to the planned readmission algorithm are described in Table 4. We felt further examination of diagnostic cardiac catheterization and cardiac devices was warranted given their high frequency, despite low positive predictive value. We also elected not to alter the categorization of amputation or debridement because it was not easy to determine whether these admissions were planned or unplanned even with chart review. We plan further analyses of these procedure categories.

Suggested Changes to Planned Readmission Algorithm v2.1 With Rationale
| Action | Diagnosis or Procedure Category | Algorithm/Chart, N | Rationale for Change |
| --- | --- | --- | --- |
| Remove from planned procedure list | Therapeutic radiation (CCS 211) | Accurate: planned/planned 0; unplanned/unplanned 0. Inaccurate: unplanned/planned 0; planned/unplanned 12 | The algorithm was inaccurate in every case. All therapeutic radiology during readmissions was performed because of acute illness (pain crisis, neurologic crisis) or because scheduled treatment occurred during an unplanned readmission. In national data, this ranks as the 25th most common planned procedure identified by the algorithm v2.1. |
| Remove from planned procedure list | Cancer chemotherapy (CCS 224) | Accurate: planned/planned 22; unplanned/unplanned 0. Inaccurate: unplanned/planned 0; planned/unplanned 18 | Of the 22 correctly identified as planned, 18 (82%) would already have been categorized as planned because of a principal diagnosis of maintenance chemotherapy. Therefore, removing CCS 224 from the planned procedure list would only miss a small fraction of planned readmissions but would avoid a large number of misclassifications. In national data, this ranks as the 8th most common planned procedure identified by the algorithm v2.1. |
| Add to planned procedure list | None | | The abstractors felt a planned readmission was missed by the algorithm in 15 cases. A handful of these cases were missed because the planned procedure was not on the current planned procedure list; however, those procedures (eg, abdominal paracentesis, colonoscopy, endoscopy) were nearly always unplanned overall and should therefore not be added as procedures that potentially qualify an admission as planned. |
| Remove from acute diagnosis list | None | | The abstractors felt a planned readmission was missed by the algorithm in 15 cases. The relevant disqualifying acute diagnoses were much more often associated with unplanned readmissions in our dataset. |
| Add to acute diagnosis list | Hypertension with complications (CCS 99) | Accurate: planned/planned 1; unplanned/unplanned 2. Inaccurate: unplanned/planned 0; planned/unplanned 10 | This CCS was associated with only 1 planned readmission (for elective nephrectomy, a very rare procedure). Every other time this CCS appeared in the dataset, it was associated with an unplanned readmission (12/13, 92%); 10 of those, however, were misclassified by the algorithm as planned because they were not excluded by diagnosis (91% error rate). Consequently, adding this CCS to the acute diagnosis list is likely to miss only a very small fraction of planned readmissions, while making the overall algorithm much more accurate. |
| Split diagnosis condition category into component ICD-9 codes | Pancreatic disorders (CCS 152) | Accurate: planned/planned 0; unplanned/unplanned 1. Inaccurate: unplanned/planned 0; planned/unplanned 2 | ICD-9 code 577.0 (acute pancreatitis) is the only acute code in this CCS. Acute pancreatitis was present in 2 cases that were misclassified as planned. Clinically, there is no situation in which a planned procedure would reasonably be performed in the setting of acute pancreatitis. Moving ICD-9 code 577.0 to the acute list and leaving the rest of the ICD-9 codes in CCS 152 on the nonacute list will enable the algorithm to continue to identify planned procedures for chronic pancreatitis. |
| Split diagnosis condition category into component ICD-9 codes | Biliary tract disease (CCS 149) | Accurate: planned/planned 2; unplanned/unplanned 3. Inaccurate: unplanned/planned 0; planned/unplanned 12 | This CCS is a mix of acute and chronic diagnoses. Of 14 charts classified as planned with CCS 149 in the principal diagnosis field, 12 were misclassified (of which 10 were associated with cholecystectomy). Separating out the acute and nonacute diagnoses will increase the accuracy of the algorithm while still ensuring that planned cholecystectomies and other procedures can be identified. Of the ICD-9 codes in CCS 149, the following will be added to the acute diagnosis list: 574.0, 574.3, 574.6, 574.8, 575.0, 575.12, 576.1. |
| Consider for change after additional study | Diagnostic cardiac catheterization (CCS 47) | Accurate: planned/planned 3*; unplanned/unplanned 13*. Inaccurate: unplanned/planned 0*; planned/unplanned 25* | The algorithm misclassified as planned 25/38 (66%) unplanned readmissions in which diagnostic catheterizations were the only qualifying planned procedure. It also correctly identified 3/3 (100%) planned readmissions in which diagnostic cardiac catheterizations were the only qualifying planned procedure. This is the highest volume procedure in national data. |
| Consider for change after additional study | Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (CCS 48) | Accurate: planned/planned 7†; unplanned/unplanned 1†. Inaccurate: unplanned/planned 1†; planned/unplanned 4† | The algorithm misclassified as planned 4/5 (80%) unplanned readmissions in which cardiac devices were the only qualifying procedure. However, it also correctly identified 7/8 (87.5%) planned readmissions in which cardiac devices were the only qualifying planned procedure. CCS 48 is the second most common planned procedure category nationally. |

NOTE: Abbreviations: CCS, Clinical Classification Software; ICD-9, International Classification of Diseases, Ninth Revision. Algorithm/chart pairs give the algorithm's classification followed by the chart-review classification. *Number of cases in which CCS 47 was the only qualifying procedure. †Number of cases in which CCS 48 was the only qualifying procedure.

The revised algorithm (v3.0) had a weighted sensitivity of 49.8%, weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5% (Table 2). In aggregate, these changes would increase the reported unplanned readmission rate from 16.0% to 16.1% in the hospital‐wide readmission measure, using 2011 to 2012 data, and would decrease the fraction of all readmissions considered planned from 7.8% to 7.2%.

DISCUSSION

We developed an algorithm based on administrative data that in its currently implemented form is very accurate at identifying unplanned readmissions, ensuring that readmissions included in publicly reported readmission measures are likely to be truly unplanned. However, nearly half of readmissions the algorithm classifies as planned are actually unplanned. That is, the algorithm is overcautious in excluding unplanned readmissions that could have counted as outcomes, particularly among admissions that include diagnostic cardiac catheterization or placement of cardiac devices (pacemakers, defibrillators). However, these errors only occur within the 7.8% of readmissions that are classified as planned and therefore do not affect overall readmission rates dramatically. A perfect algorithm would reclassify approximately half of these planned readmissions as unplanned, increasing the overall readmission rate by 0.6 percentage points.

On the other hand, the algorithm also identifies only approximately half of true planned readmissions as planned. Because the true prevalence of planned readmissions is low (approximately 9% of readmissions based on weighted chart review prevalence, or an absolute rate of 1.4%), this low sensitivity has a small effect on algorithm performance. Removing all true planned readmissions from the measure outcome would decrease the overall readmission rate by 0.8 percentage points, similar to the expected 0.6 percentage point increase that would result from better identifying unplanned readmissions; thus, a perfect algorithm would likely decrease the reported unplanned readmission rate by a net 0.2 percentage points. Overall, the existing algorithm appears to come close to the true prevalence of planned readmissions, despite inaccuracy on an individual-case basis. The algorithm performed best at large hospitals, which are at greatest risk of being statistical outliers and of accruing penalties under the Hospital Readmissions Reduction Program.[10]
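These offsetting estimates follow from simple arithmetic, sketched below assuming an overall readmission rate of 16 percentage points (consistent with the hospital-wide measure figures reported above) and the weighted test characteristics of algorithm v2.1.

```python
# Back-of-envelope check of the net effect of a perfect algorithm.
overall_rate = 16.0        # assumed overall readmission rate, percentage points

flagged_fraction = 0.078       # share of readmissions the algorithm flags as planned
ppv = 0.516                    # ~half of flagged readmissions are truly planned
sensitivity = 0.451            # ~half of truly planned readmissions are flagged
true_planned_fraction = 0.089  # weighted chart-review prevalence of planned readmissions

# Misflagged unplanned readmissions a perfect algorithm would restore to the outcome:
increase = overall_rate * flagged_fraction * (1 - ppv)

# Truly planned readmissions the algorithm misses, which it would remove:
decrease = overall_rate * true_planned_fraction * (1 - sensitivity)

print(round(increase, 1), round(decrease, 1), round(increase - decrease, 1))
# 0.6 0.8 -0.2  (net change in reported rate, percentage points)
```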

We identified several changes that marginally improved the performance of the algorithm by reducing the number of unplanned readmissions that are incorrectly removed from the measure, while avoiding the inappropriate inclusion of planned readmissions in the outcome. This revised algorithm, v3.0, was applied to public reporting of readmission rates at the end of 2014. Overall, implementing these changes increases the reported readmission rate very slightly. We also identified other procedures associated with high inaccuracy rates, removal of which would have larger impact on reporting rates, and which therefore merit further evaluation.

There are other potential methods of identifying planned readmissions. For instance, as of October 1, 2013, new administrative billing codes were created to allow hospitals to indicate that a patient was discharged with a planned acute‐care hospital inpatient readmission, without limitation as to when it will take place.[11] This code must be used at the time of the index admission to indicate that a future planned admission is expected, and was specified only to be used for neonates and patients with acute myocardial infarction. This approach, however, would omit planned readmissions that are not known to the initial discharging team, potentially missing planned readmissions. Conversely, some patients discharged with a plan for readmission may be unexpectedly readmitted for an unplanned reason. Given that the new codes were not available at the time we conducted the validation study, we were not able to determine how often the billing codes accurately identified planned readmissions. This would be an important area to consider for future study.

An alternative approach would be to create indicator codes to be applied at the time of readmission that would indicate whether that admission was planned or unplanned. Such a code would have the advantage of allowing each planned readmission to be flagged by the admitting clinicians at the time of admission rather than by an algorithm that inherently cannot be perfect. However, identifying planned readmissions at the time of readmission would also create opportunity for gaming and inconsistent application of definitions between hospitals; additional checks would need to be put in place to guard against these possibilities.

Our study has some limitations. We relied on the opinion of chart abstractors to determine whether a readmission was planned or unplanned; in a few cases, such as smoldering wounds that ultimately require surgical intervention, that determination is debatable. Abstractions were done at local institutions to minimize risks to patient privacy, and therefore we could not centrally verify determinations of planned status except by reviewing source of admission, dates of procedures, and narrative comments reported by the abstractors. Finally, we did not have sufficient volume of planned procedures to determine accuracy of the algorithm for less common procedure categories or individual procedures within categories.

In summary, we developed an algorithm to identify planned readmissions from administrative data that had high specificity and moderate sensitivity, and refined it based on chart validation. This algorithm is in use in public reporting of readmission measures to maximize the probability that the reported readmission rates represent truly unplanned readmissions.[12]

Disclosures: Financial support: This work was performed under contract HHSM-500-2008-0025I/HHSM-500-T0001, Modification No. 000008, titled "Measure Instrument Development and Support," funded by the Centers for Medicare and Medicaid Services (CMS), an agency of the US Department of Health and Human Services. Drs. Horwitz and Ross are supported by the National Institute on Aging (K08 AG038336 and K08 AG032886, respectively) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Krumholz is supported by grant U01 HL105270-05 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. No funding source had any role in the study design; in the collection, analysis, and interpretation of data; or in the writing of the article. The CMS reviewed and approved the use of its data for this work and approved submission of the manuscript. All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that all authors have support from the CMS for the submitted work. In addition, Dr. Ross is a member of a scientific advisory board for FAIR Health Inc. Dr. Krumholz chairs a cardiac scientific advisory board for UnitedHealth and is the recipient of research agreements from Medtronic and Johnson & Johnson through Yale University, to develop methods of clinical trial data sharing. All other authors report no conflicts of interest.

References
  1. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
  2. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252.
  3. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  4. Grosso LM, Curtis JP, Lin Z, et al. Hospital-level 30-day all-cause risk-standardized readmission rate following elective primary total hip arthroplasty (THA) and/or total knee arthroplasty (TKA). Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page
  5. van Walraven C, Jennings A, Forster AJ. A meta-analysis of hospital 30-day avoidable readmission rates. J Eval Clin Pract. 2011;18(6):1211-1218.
  6. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402.
  7. Horwitz LI, Partovian C, Lin Z, et al. Centers for Medicare & Medicaid Services. 3(4):477-492.
  8. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343.
  9. Centers for Medicare and Medicaid Services. Inpatient Prospective Payment System/Long-Term Care Hospital (IPPS/LTCH) final rule. Fed Regist. 2013;78:50533-50534.
  10. Long SK, Stockley K, Dahlen H. Massachusetts health reforms: uninsurance remains low, self-reported health status improves as state prepares to tackle costs. Health Aff (Millwood). 2012;31(2):444-451.
Issue
Journal of Hospital Medicine - 10(10)
Page Number
670-677
Display Headline
Development and Validation of an Algorithm to Identify Planned Readmissions From Claims Data
Article Source

© 2015 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Leora Horwitz, MD, Department of Population Health, NYU School of Medicine, 550 First Avenue, TRB, Room 607, New York, NY 10016; Telephone: 646‐501‐2685; Fax: 646‐501‐2706; E‐mail: [email protected]

Hospital Mortality Measure for COPD

Article Type
Changed
Sun, 05/21/2017 - 18:06
Display Headline
Development, validation, and results of a risk‐standardized measure of hospital 30‐day mortality for patients with exacerbation of chronic obstructive pulmonary disease

Chronic obstructive pulmonary disease (COPD) affects as many as 24 million individuals in the United States, is responsible for more than 700,000 annual hospital admissions, and is currently the nation's third leading cause of death, accounting for an estimated $49.9 billion in medical spending in 2010.[1, 2] Reported in‐hospital mortality rates for patients hospitalized for exacerbations of COPD range from 2% to 5%.[3, 4, 5, 6, 7] Information about 30‐day mortality rates following hospitalization for COPD is more limited; however, international studies suggest that rates range from 3% to 9%,[8, 9] and 90‐day mortality rates exceed 15%.[10]

Despite this significant clinical and economic impact, there have been no large‐scale, sustained efforts to measure the quality or outcomes of hospital care for patients with COPD in the United States. What little is known about the treatment of patients with COPD suggests widespread opportunities to increase adherence to guideline‐recommended therapies, to reduce the use of ineffective treatments and tests, and to address variation in care across institutions.[5, 11, 12]

Public reporting of hospital performance is a key strategy for improving the quality and safety of hospital care, both in the United States and internationally.[13] Since 2007, the Centers for Medicare and Medicaid Services (CMS) has reported hospital mortality rates on the Hospital Compare Web site, and COPD is 1 of the conditions highlighted in the Affordable Care Act for future consideration.[14] Such initiatives rely on validated, risk‐adjusted performance measures for comparisons across institutions and to enable outcomes to be tracked over time. We present the development, validation, and results of a model intended for public reporting of risk‐standardized mortality rates for patients hospitalized with exacerbations of COPD that has been endorsed by the National Quality Forum.[15]

METHODS

Approach to Measure Development

We developed this measure in accordance with guidelines described by the National Quality Forum,[16] CMS' Measure Management System,[17] and the American Heart Association scientific statement, Standards for Statistical Models Used for Public Reporting of Health Outcomes.[18] Throughout the process we obtained expert clinical and stakeholder input through meetings with a clinical advisory group and a national technical expert panel (see Acknowledgments). Last, we presented the proposed measure specifications and a summary of the technical expert panel discussions online and made a widely distributed call for public comments. We took the comments into consideration during the final stages of measure development (available at https://www.cms.gov/MMS/17_CallforPublicComment.asp).

Data Sources

We used claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files from 2008 to develop and validate the model, and examined model reliability using data from 2007 and 2009. The Medicare enrollment database was used to determine Medicare Fee‐for‐Service enrollment and mortality.

Study Cohort

Admissions were considered eligible for inclusion if the patient was 65 years or older, was admitted to a nonfederal acute care hospital in the United States, and had a principal diagnosis of COPD or a principal diagnosis of acute respiratory failure or respiratory arrest when paired with a secondary diagnosis of COPD with exacerbation (Table 1).

ICD-9-CM Codes Used to Define the Measure Cohort

| ICD-9-CM | Description |
| --- | --- |
| 491.21 | Obstructive chronic bronchitis; with (acute) exacerbation; acute exacerbation of COPD, decompensated COPD, decompensated COPD with exacerbation |
| 491.22 | Obstructive chronic bronchitis; with acute bronchitis |
| 491.8 | Other chronic bronchitis; chronic: tracheitis, tracheobronchitis |
| 491.9 | Unspecified chronic bronchitis |
| 492.8 | Other emphysema; emphysema (lung or pulmonary): NOS, centriacinar, centrilobular, obstructive, panacinar, panlobular, unilateral, vesicular; MacLeod's syndrome; Swyer-James syndrome; unilateral hyperlucent lung |
| 493.20 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, unspecified |
| 493.21 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with status asthmaticus |
| 493.22 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with (acute) exacerbation |
| 496 | Chronic: nonspecific lung disease, obstructive lung disease, obstructive pulmonary disease (COPD) NOS. (Note: This code is not to be used with any code from categories 491-493.) |
| 518.81* | Other diseases of lung; acute respiratory failure; respiratory failure NOS |
| 518.82* | Other diseases of lung; acute respiratory failure; other pulmonary insufficiency, acute respiratory distress |
| 518.84* | Other diseases of lung; acute respiratory failure; acute and chronic respiratory failure |
| 799.1* | Other ill-defined and unknown causes of morbidity and mortality; respiratory arrest, cardiorespiratory failure |

NOTE: Abbreviations: COPD, chronic obstructive pulmonary disease; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; NOS, not otherwise specified.
*Principal diagnosis when combined with a secondary diagnosis of acute exacerbation of COPD (491.21, 491.22, 493.21, or 493.22).

If a patient was discharged and readmitted to a second hospital on the same or the next day, we combined the 2 acute care admissions into a single episode of care and assigned the mortality outcome to the first admitting hospital. We excluded admissions for patients who were enrolled in Medicare Hospice in the 12 months prior to or on the first day of the index hospitalization. An index admission was any eligible admission assessed in the measure for the outcome. We also excluded admissions for patients who were discharged against medical advice, those for whom vital status at 30 days was unknown or recorded inconsistently, and patients with unreliable data (eg, age >115 years). For patients with multiple hospitalizations during a single year, we randomly selected 1 admission per patient to avoid survival bias. Finally, to assure adequate risk adjustment we limited the analysis to patients who had continuous enrollment in Medicare Fee‐for‐Service Parts A and B for the 12 months prior to their index admission so that we could identify comorbid conditions coded during all prior encounters.
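For illustration only, the exclusion cascade and the one-admission-per-patient selection described above can be sketched in a few lines. All field names here (`hospice_prior`, `continuous_ffs_12m`, and so on) are hypothetical placeholders rather than actual Medicare file variables, and this is a sketch of the logic, not the production code used for the measure.

```python
import random

def select_index_admissions(admissions, seed=0):
    """Sketch of the cohort rules. Each admission is a dict with
    hypothetical fields: patient_id, age, hospice_prior, discharged_ama,
    vital_status_known, continuous_ffs_12m."""
    eligible = [
        a for a in admissions
        if 65 <= a["age"] <= 115          # age 65+, drop unreliable ages
        and not a["hospice_prior"]        # no hospice before/at index admission
        and not a["discharged_ama"]       # not discharged against medical advice
        and a["vital_status_known"]       # 30-day vital status ascertainable
        and a["continuous_ffs_12m"]       # 12 months continuous Parts A and B
    ]
    # Keep one randomly selected admission per patient to avoid survival bias.
    by_patient = {}
    for a in eligible:
        by_patient.setdefault(a["patient_id"], []).append(a)
    rng = random.Random(seed)
    return [rng.choice(candidates) for candidates in by_patient.values()]
```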

Outcomes

The outcome of 30‐day mortality was defined as death from any cause within 30 days of the admission date for the index hospitalization. Mortality was assessed at 30 days to standardize the period of outcome ascertainment,[19] and because 30 days is a clinically meaningful time frame, during which differences in the quality of hospital care may be revealed.
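Operationally, the outcome is simple date arithmetic; a minimal sketch follows (the function and argument names are ours, not part of the measure specification).

```python
from datetime import date

def died_within_30_days(admission_date, death_date):
    """All-cause death within 30 days of the index admission date.
    death_date is None if no death was recorded during follow-up."""
    if death_date is None:
        return False
    return 0 <= (death_date - admission_date).days <= 30
```

For example, an admission on January 1 with death on January 31 (day 30) counts as an outcome, while death on February 1 (day 31) does not.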

Risk‐Adjustment Variables

We randomly selected half of all COPD admissions in 2008 that met the inclusion and exclusion criteria to create a model development sample. Candidate variables for inclusion in the risk‐standardized model were selected by a clinician team from diagnostic groups included in the Hierarchical Condition Category clinical classification system[20] and included age and comorbid conditions. Sleep apnea (International Classification of Diseases, 9th Revision, Clinical Modification [ICD‐9‐CM] condition codes 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, and 780.57) and mechanical ventilation (ICD‐9‐CM procedure codes 93.90, 96.70, 96.71, and 96.72) were also included as candidate variables.

We defined a condition as present for a given patient if it was coded in the inpatient, outpatient, or physician claims data sources in the preceding 12 months, including the index admission. Because a subset of the condition category variables can represent a complication of care, we did not consider them to be risk factors if they appeared only as secondary diagnosis codes for the index admission and not in claims submitted during the prior year.
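The rule above can be made concrete with a small sketch; the set-based representation and the names (`prior_year_ccs`, `complication_prone_ccs`) are our assumptions for illustration, not the study's implementation.

```python
def condition_counts_as_risk_factor(cc, prior_year_ccs, index_secondary_ccs,
                                    complication_prone_ccs):
    """A condition category (CC) is a risk factor if it was coded in any
    claim during the 12 months before the index admission; CCs that can
    represent complications of care do not count when they appear only as
    secondary diagnoses of the index admission."""
    if cc in prior_year_ccs:
        return True
    if cc in index_secondary_ccs:
        return cc not in complication_prone_ccs
    return False
```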

We selected final variables for inclusion in the risk‐standardized model based on clinical considerations and a modified approach to stepwise logistic regression. The final patient‐level risk‐adjustment model included 42 variables (Table 2).

Adjusted OR for Model Risk Factors and Mortality in Development Sample (Hierarchical Logistic Regression Model)

| Variable | Development: Frequency, % | Development: OR | Development: 95% CI | Validation: Frequency, % | Validation: OR | Validation: 95% CI |
| --- | --- | --- | --- | --- | --- | --- |
| Demographics | | | | | | |
| Age 65 years (continuous) | | 1.03 | 1.03-1.04 | | 1.03 | 1.03-1.04 |
| Cardiovascular/respiratory | | | | | | |
| Sleep apnea (ICD-9-CM: 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, 780.57)* | 9.57 | 0.87 | 0.81-0.94 | 9.72 | 0.84 | 0.78-0.90 |
| History of mechanical ventilation (ICD-9-CM: 93.90, 96.70, 96.71, 96.72)* | 6.00 | 1.19 | 1.11-1.27 | 6.00 | 1.15 | 1.08-1.24 |
| Respirator dependence/respiratory failure (CC 77-78)* | 1.15 | 0.89 | 0.77-1.02 | 1.20 | 0.78 | 0.68-0.91 |
| Cardiorespiratory failure and shock (CC 79) | 26.35 | 1.60 | 1.53-1.68 | 26.34 | 1.59 | 1.52-1.66 |
| Congestive heart failure (CC 80) | 41.50 | 1.34 | 1.28-1.39 | 41.39 | 1.31 | 1.25-1.36 |
| Chronic atherosclerosis (CC 83-84)* | 50.44 | 0.87 | 0.83-0.90 | 50.12 | 0.91 | 0.87-0.94 |
| Arrhythmias (CC 92-93) | 37.15 | 1.17 | 1.12-1.22 | 37.06 | 1.15 | 1.10-1.20 |
| Vascular or circulatory disease (CC 104-106) | 38.20 | 1.09 | 1.05-1.14 | 38.09 | 1.02 | 0.98-1.06 |
| Fibrosis of lung and other chronic lung disorder (CC 109) | 16.96 | 1.08 | 1.03-1.13 | 17.08 | 1.11 | 1.06-1.17 |
| Asthma (CC 110) | 17.05 | 0.67 | 0.63-0.70 | 16.90 | 0.67 | 0.63-0.70 |
| Pneumonia (CC 111-113) | 49.46 | 1.29 | 1.24-1.35 | 49.41 | 1.27 | 1.22-1.33 |
| Pleural effusion/pneumothorax (CC 114) | 11.78 | 1.17 | 1.11-1.23 | 11.54 | 1.18 | 1.12-1.25 |
| Other lung disorders (CC 115) | 53.07 | 0.80 | 0.77-0.83 | 53.17 | 0.83 | 0.80-0.87 |
| Other comorbid conditions | | | | | | |
| Metastatic cancer and acute leukemia (CC 7) | 2.76 | 2.34 | 2.14-2.56 | 2.79 | 2.15 | 1.97-2.35 |
| Lung, upper digestive tract, and other severe cancers (CC 8)* | 5.98 | 1.80 | 1.68-1.92 | 6.02 | 1.98 | 1.85-2.11 |
| Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal and other cancers and tumors; other respiratory and heart neoplasms (CC 9-11) | 14.13 | 1.03 | 0.97-1.08 | 14.19 | 1.01 | 0.95-1.06 |
| Other digestive and urinary neoplasms (CC 12) | 6.91 | 0.91 | 0.84-0.98 | 7.05 | 0.85 | 0.79-0.92 |
| Diabetes and DM complications (CC 15-20, 119-120) | 38.31 | 0.91 | 0.87-0.94 | 38.29 | 0.91 | 0.87-0.94 |
| Protein-calorie malnutrition (CC 21) | 7.40 | 2.18 | 2.07-2.30 | 7.44 | 2.09 | 1.98-2.20 |
| Disorders of fluid/electrolyte/acid-base (CC 22-23) | 32.05 | 1.13 | 1.08-1.18 | 32.16 | 1.24 | 1.19-1.30 |
| Other endocrine/metabolic/nutritional disorders (CC 24) | 67.99 | 0.75 | 0.72-0.78 | 67.88 | 0.76 | 0.73-0.79 |
| Other gastrointestinal disorders (CC 36) | 56.21 | 0.81 | 0.78-0.84 | 56.18 | 0.78 | 0.75-0.81 |
| Osteoarthritis of hip or knee (CC 40) | 9.32 | 0.74 | 0.69-0.79 | 9.33 | 0.80 | 0.74-0.85 |
| Other musculoskeletal and connective tissue disorders (CC 43) | 64.14 | 0.83 | 0.80-0.86 | 64.20 | 0.83 | 0.80-0.87 |
| Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 40.80 | 1.08 | 1.04-1.12 | 40.72 | 1.08 | 1.04-1.13 |
| Dementia and senility (CC 49-50) | 17.06 | 1.09 | 1.04-1.14 | 16.97 | 1.09 | 1.04-1.15 |
| Drug/alcohol abuse, without dependence (CC 53)* | 23.51 | 0.78 | 0.75-0.82 | 23.38 | 0.76 | 0.72-0.80 |
| Other psychiatric disorders (CC 60)* | 16.49 | 1.12 | 1.07-1.18 | 16.43 | 1.12 | 1.06-1.17 |
| Quadriplegia, paraplegia, functional disability (CC 67-69, 100-102, 177-178) | 4.92 | 1.03 | 0.95-1.12 | 4.92 | 1.08 | 0.99-1.17 |
| Mononeuropathy, other neurological conditions/injuries (CC 76) | 11.35 | 0.85 | 0.80-0.91 | 11.28 | 0.88 | 0.83-0.93 |
| Hypertension and hypertensive disease (CC 90-91) | 80.40 | 0.78 | 0.75-0.82 | 80.35 | 0.79 | 0.75-0.83 |
| Stroke (CC 95-96)* | 6.77 | 1.00 | 0.93-1.08 | 6.73 | 0.98 | 0.91-1.05 |
| Retinal disorders, except detachment and vascular retinopathies (CC 121) | 10.79 | 0.87 | 0.82-0.93 | 10.69 | 0.90 | 0.85-0.96 |
| Other eye disorders (CC 124)* | 19.05 | 0.90 | 0.86-0.95 | 19.13 | 0.98 | 0.85-0.93 |
| Other ear, nose, throat, and mouth disorders (CC 127) | 35.21 | 0.83 | 0.80-0.87 | 35.02 | 0.80 | 0.77-0.83 |
| Renal failure (CC 131)* | 17.92 | 1.12 | 1.07-1.18 | 18.16 | 1.13 | 1.08-1.19 |
| Decubitus ulcer or chronic skin ulcer (CC 148-149) | 7.42 | 1.27 | 1.19-1.35 | 7.42 | 1.33 | 1.25-1.42 |
| Other dermatological disorders (CC 153) | 28.46 | 0.90 | 0.87-0.94 | 28.32 | 0.89 | 0.86-0.93 |
| Trauma (CC 154-156, 158-161) | 9.04 | 1.09 | 1.03-1.16 | 8.99 | 1.15 | 1.08-1.22 |
| Vertebral fractures (CC 157) | 5.01 | 1.33 | 1.24-1.44 | 4.97 | 1.29 | 1.20-1.39 |
| Major complications of medical care and trauma (CC 164) | 5.47 | 0.81 | 0.75-0.88 | 5.55 | 0.82 | 0.76-0.89 |

NOTE: Development sample: 150,035 admissions at 4537 hospitals; validation sample: 149,646 admissions at 4535 hospitals. Abbreviations: CC, condition category; CI, confidence interval; DM, diabetes mellitus; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; OR, odds ratio.
*Indicates variable forced into the model.

Model Derivation

We used hierarchical logistic regression models to model the log‐odds of mortality as a function of patient‐level clinical characteristics and a random hospital‐level intercept. At the patient level, each model adjusts the log‐odds of mortality for age and the selected clinical covariates. The second level models the hospital‐specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of mortality, after accounting for patient risk. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.
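In conventional notation (the symbols here are ours), the two-level model just described can be written as:

```latex
% Patient i at hospital j; Y_ij = 1 indicates death within 30 days.
\operatorname{logit}\,\Pr(Y_{ij} = 1) = \alpha_j + \mathbf{X}_{ij}^{\top}\boldsymbol{\beta},
\qquad \alpha_j \sim N(\mu_\alpha, \tau^2)
```

where X_ij are the patient-level covariates, beta their coefficients, alpha_j the hospital-specific intercept, and tau^2 the between-hospital variance; tau^2 = 0 would correspond to identical intercepts across all hospitals after adjusting for patient risk.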

Estimation of Hospital Risk‐Standardized Mortality Rate

We calculated a risk‐standardized mortality rate, defined as the ratio of predicted to expected deaths (similar to observed‐to‐expected), multiplied by the national unadjusted mortality rate.[21] The expected number of deaths for each hospital was estimated by applying the estimated regression coefficients to the characteristics of each hospital's patients, adding the average of the hospital‐specific intercepts, transforming the data by using an inverse logit function, and summing the data from all patients in the hospital to obtain the count. The predicted number of deaths was calculated in the same way, substituting the hospital‐specific intercept for the average hospital‐specific intercept.
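A minimal sketch of this predicted-over-expected calculation, assuming the patient-level linear predictors (Xb, without any intercept) are already in hand; the function and variable names are ours, not the measure's implementation.

```python
import math

def invlogit(z):
    """Inverse logit: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def risk_standardized_rate(linear_predictors, hospital_intercept,
                           mean_intercept, national_rate):
    """RSMR = (predicted deaths / expected deaths) * national unadjusted rate.
    Predicted uses the hospital-specific intercept; expected uses the
    average of the hospital-specific intercepts."""
    predicted = sum(invlogit(lp + hospital_intercept) for lp in linear_predictors)
    expected = sum(invlogit(lp + mean_intercept) for lp in linear_predictors)
    return (predicted / expected) * national_rate
```

A hospital whose intercept equals the average intercept receives exactly the national rate; an intercept above the average yields a proportionally higher risk-standardized rate.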

Model Performance, Validation, and Reliability Testing

We used the remaining admissions in 2008 as the model validation sample. We computed several summary statistics to assess the patient‐level model performance in both the development and validation samples,[22] including over‐fitting indices, predictive ability, area under the receiver operating characteristic (ROC) curve, distribution of residuals, and model χ2. In addition, we assessed face validity through a survey of members of the technical expert panel. To assess reliability of the model across data years, we repeated the modeling process using qualifying COPD admissions in both 2007 and 2009. Finally, to assess generalizability, we evaluated the model's performance in an all‐payer sample of data from patients admitted to California hospitals in 2006.
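The area under the ROC curve used here equals the concordance (C) statistic: the probability that a randomly chosen patient who died received a higher predicted risk than a randomly chosen survivor. A small illustrative O(n^2) implementation (not the SAS code used in the study):

```python
def c_statistic(predictions_and_outcomes):
    """predictions_and_outcomes: iterable of (predicted_risk, outcome)
    pairs, with outcome 1 for death and 0 for survival. Ties count half."""
    deaths = [p for p, y in predictions_and_outcomes if y == 1]
    survivors = [p for p, y in predictions_and_outcomes if y == 0]
    concordant = 0.0
    for d in deaths:
        for s in survivors:
            if d > s:
                concordant += 1.0
            elif d == s:
                concordant += 0.5
    return concordant / (len(deaths) * len(survivors))
```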

Analyses were conducted using SAS version 9.1.3 (SAS Institute Inc., Cary, NC). We estimated the hierarchical models using the GLIMMIX procedure in SAS.

The Human Investigation Committee at the Yale University School of Medicine/Yale New Haven Hospital approved an exemption (HIC#0903004927) for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation

After exclusions were applied, the development sample included 150,035 admissions in 2008 at 4537 US hospitals (Figure 1). Factors that were most strongly associated with the risk of mortality included metastatic cancer (odds ratio [OR] 2.34), protein‐calorie malnutrition (OR 2.18), nonmetastatic cancers of the lung and upper digestive tract (OR 1.80), cardiorespiratory failure and shock (OR 1.60), and congestive heart failure (OR 1.34) (Table 2).

Figure 1
Model development and validation samples. Abbreviations: COPD, chronic obstructive pulmonary disease; FFS, Fee‐for‐Service. Exclusion categories are not mutually exclusive.

Model Performance, Validation, and Reliability

The model had a C statistic of 0.72, indicating good discrimination, and predicted mortality in the development sample ranged from 1.52% in the lowest decile to 23.74% in the highest. The model validation sample, using the remaining cases from 2008, included 149,646 admissions from 4535 hospitals. Variable frequencies and ORs were similar in both samples (Table 2). Model performance was also similar in the validation samples, with good model discrimination and fit (Table 3). Ten of 12 technical expert panel members responded to the survey, of whom 90% at least somewhat agreed with the statement, "The COPD mortality measure provides an accurate reflection of quality." When the model was applied to patients age 18 years and older in the 2006 California Patient Discharge Data, overall discrimination was good (C statistic, 0.74), including among those age 18 to 64 years (C statistic, 0.75) and those age 65 years and older (C statistic, 0.70).

Model Performance in Development and Validation Samples

| Indices | Development Sample, 2008 | Validation Sample, 2008 | Data Year 2007 | Data Year 2009 |
| --- | --- | --- | --- | --- |
| Number of admissions | 150,035 | 149,646 | 259,911 | 279,377 |
| Number of hospitals | 4537 | 4535 | 4636 | 4571 |
| Mean risk-standardized mortality rate, % (SD) | 8.62 (0.94) | 8.64 (1.07) | 8.97 (1.12) | 8.08 (1.09) |
| Calibration (γ0, γ1) | 0.034, 0.985 | 0.009, 1.004 | 0.095, 1.022 | 0.120, 0.981 |
| Discrimination, predictive ability (lowest decile % to highest decile %) | 1.52 to 23.74 | 1.60 to 23.78 | 1.54 to 24.64 | 1.42 to 22.36 |
| Discrimination, area under the ROC curve (C statistic) | 0.720 | 0.723 | 0.728 | 0.722 |
| Residuals lack of fit, % of Pearson residuals falling in: | | | | |
| < -2 | 0 | 0 | 0 | 0 |
| [-2, 0) | 91.14 | 91.4 | 91.08 | 91.93 |
| [0, 2) | 1.66 | 1.7 | 1.96 | 1.42 |
| 2+ | 6.93 | 6.91 | 6.96 | 6.65 |
| Model Wald χ2 (number of covariates) | 6982.11 (42) | 7051.50 (42) | 13042.35 (42) | 12542.15 (42) |
| P value | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
| Between-hospital variance (standard error) | 0.067 (0.008) | 0.078 (0.009) | 0.067 (0.006) | 0.072 (0.006) |

NOTE: Abbreviations: ROC, receiver operating characteristic; SD, standard deviation. Over-fitting indices (γ0, γ1) provide evidence of over-fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities are p̂ = 1/(1 + exp{-Xb}), and Z = Xb (ie, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample: Logit(P(Y=1|Z)) = γ0 + γ1Z. Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over-fitting.

Reliability testing demonstrated consistent performance over several years. The frequency and ORs of the variables included in the model showed only minor changes over time. The area under the ROC curve (C statistic) was 0.73 for the model in the 2007 sample and 0.72 for the model using 2009 data (Table 3).

Hospital Risk‐Standardized Mortality Rates

The mean unadjusted hospital 30‐day mortality rate was 8.6% and ranged from 0% to 100% (Figure 2a). Risk‐standardized mortality rates varied across hospitals (Figure 2b). The mean risk‐standardized mortality rate was 8.6% and ranged from 5.9% to 13.5%. The odds of mortality at a hospital 1 standard deviation above average were 1.20 times those of a hospital 1 standard deviation below average.

Figure 2
(a) Distribution of hospital‐level 30‐day mortality rates and (b) hospital‐level 30‐day risk‐standardized mortality rates (2008 development sample; n = 150,035 admissions from 4537 hospitals). Abbreviations: COPD, chronic obstructive pulmonary disease.

DISCUSSION

We present a hospital‐level risk‐standardized mortality measure for patients admitted with COPD based on administrative claims data that are intended for public reporting and that have achieved endorsement by the National Quality Forum, a voluntary consensus standards‐setting organization. Across more than 4500 US hospitals, the mean 30‐day risk‐standardized mortality rate in 2008 was 8.6%, and we observed considerable variation across institutions, despite adjustment for case mix, suggesting that improvement by lower‐performing institutions may be an achievable goal.

Although improving the delivery of evidence‐based care processes and outcomes of patients with acute myocardial infarction, heart failure, and pneumonia has been the focus of national quality improvement efforts for more than a decade, COPD has largely been overlooked.[23] Within this context, this analysis represents the first attempt to systematically measure, at the hospital level, 30‐day all‐cause mortality for patients admitted to US hospitals for exacerbation of COPD. The model we have developed and validated is intended to be used to compare the performance of hospitals while controlling for differences in the pretreatment risk of mortality of patients and accounting for the clustering of patients within hospitals, and will facilitate surveillance of hospital‐level risk‐adjusted outcomes over time.

In contrast to process‐based measures of quality, such as the percentage of patients with pneumonia who receive appropriate antibiotic therapy, performance measures based on patient outcomes provide a more comprehensive view of care and are more consistent with patients' goals.[24] Additionally, it is well established that hospital performance on individual and composite process measures explains only a small amount of the observed variation in patient outcomes between institutions.[25] In this regard, outcome measures incorporate important, but difficult to measure aspects of care, such as diagnostic accuracy and timing, communication and teamwork, the recognition and response to complications, care coordination at the time of transfers between levels of care, and care settings. Nevertheless, when used for making inferences about the quality of hospital care, individual measures such as the risk‐standardized hospital mortality rate should be interpreted in the context of other performance measures, including readmission, patient experience, and costs of care.

A number of prior investigators have described the outcomes of care for patients hospitalized with exacerbations of COPD, including identifying risk factors for mortality. Patil et al. carried out an analysis of the 1996 Nationwide Inpatient Sample and described an overall in‐hospital mortality rate of 2.5% among patients with COPD, and reported that a multivariable model containing patient sociodemographic characteristics and comorbidities had an area under the ROC curve of 0.70.[3] In contrast, this hospital‐level measure includes patients with a principal diagnosis of respiratory failure and focuses on 30‐day rather than inpatient mortality, accounting for the nearly 3‐fold higher mortality rate we observed. In a more recent study that used clinical data from a large multistate database, Tabak et al. developed a prediction model for inpatient mortality for patients with COPD that contained only 4 factors: age, blood urea nitrogen, mental status, and pulse, and achieved an area under the ROC curve of 0.72.[4] The simplicity of such a model and its reliance on clinical measurements makes it particularly well suited for bedside application by clinicians, but less valuable for large‐scale public reporting programs that rely on administrative data. In the only other study identified that focused on the assessment of hospital mortality rates, Agabiti et al. analyzed the outcomes of 12,756 patients hospitalized for exacerbations of COPD, using similar ICD‐9‐CM diagnostic criteria as in this study, at 21 hospitals in Rome, Italy.[26] They reported an average crude 30‐day mortality rate of 3.8% among a group of 5 benchmark hospitals and an average mortality of 7.5% (range, 5.2%-17.2%) among the remaining institutions.

To put the variation we observed in mortality rates into a broader context, the relative difference in the risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure, whereas rates varied 30% for COPD, from 7.6% to 9.9%.[27] Model discrimination in COPD (C statistic, 0.72) was also similar to that reported for models used for public reporting of hospital mortality in acute myocardial infarction (C statistic, 0.71) and pneumonia (C statistic, 0.72).

This study has a number of important strengths. First, the model was developed from a large sample of recent Medicare claims, achieved good discrimination, and was validated in samples not limited to Medicare beneficiaries. Second, by including patients with a principal diagnosis of COPD, as well as those with a principal diagnosis of acute respiratory failure when accompanied by a secondary diagnosis of COPD with acute exacerbation, this model can be used to assess hospital performance across the full spectrum of disease severity. This broad set of ICD‐9‐CM codes used to define the cohort also ensures that efforts to measure hospital performance will be less influenced by differences in documentation and coding practices across hospitals relating to the diagnosis or sequencing of acute respiratory failure diagnoses. Moreover, the inclusion of patients with respiratory failure is important because these patients have the greatest risk of mortality, and are those in whom efforts to improve the quality and safety of care may have the greatest impact. Third, rather than relying solely on information documented during the index admission, we used ambulatory and inpatient claims from the full year prior to the index admission to identify comorbidities and to distinguish them from potential complications of care. Finally, we did not include factors such as hospital characteristics (eg, number of beds, teaching status) in the model. Although they might have improved overall predictive ability, the goal of the hospital mortality measure is to enable comparisons of mortality rates among hospitals while controlling for differences in patient characteristics. To the extent that factors such as size or teaching status might be independently associated with hospital outcomes, it would be inappropriate to adjust away their effects, because mortality risk should not be influenced by hospital characteristics other than through their effects on quality.

These results should be viewed in light of several limitations. First, we used ICD‐9‐CM codes derived from claims files to define the patient populations included in the measure rather than collecting clinical or physiologic information prospectively or through manual review of medical records, such as the forced expiratory volume in 1 second or whether the patient required long‐term oxygen therapy. Nevertheless, we included a broad set of potential diagnosis codes to capture the full spectrum of COPD exacerbations and to minimize differences in coding across hospitals. Second, because the risk‐adjustment included diagnoses coded in the year prior to the index admission, it is potentially subject to bias due to regional differences in medical care utilization that are not driven by underlying differences in patient illness.[28] Third, using administrative claims data, we observed some paradoxical associations in the model that are difficult to explain on clinical grounds, such as a protective effect of substance and alcohol abuse or prior episodes of respiratory failure. Fourth, although we excluded patients from the analysis who were enrolled in hospice prior to, or on the day of, the index admission, we did not exclude those who chose to withdraw support, transitioned to comfort measures only, or enrolled in hospice care during a hospitalization. We do not seek to penalize hospitals for being sensitive to the preferences of patients at the end of life. At the same time, it is equally important that the measure is capable of detecting the outcomes of suboptimal care that may in some instances lead a patient or their family to withdraw support or choose hospice. Finally, we did not have the opportunity to validate the model against a clinical registry of patients with COPD, because such data do not currently exist. Nevertheless, the use of claims as a surrogate for chart data for risk adjustment has been validated for several conditions, including acute myocardial infarction, heart failure, and pneumonia.[29, 30]

CONCLUSIONS

Risk‐standardized 30‐day mortality rates for Medicare beneficiaries with COPD vary across hospitals in the US. Calculating and reporting hospital outcomes using validated performance measures may catalyze quality improvement activities and lead to better outcomes. Additional research would be helpful to confirm that hospitals with lower mortality rates achieve care that meets the goals of patients and their families better than at hospitals with higher mortality rates.

Acknowledgment

The authors thank the following members of the technical expert panel: Darlene Bainbridge, RN, MS, NHA, CPHQ, CPHRM, President/CEO, Darlene D. Bainbridge & Associates, Inc.; Robert A. Balk, MD, Director of Pulmonary and Critical Care Medicine, Rush University Medical Center; Dale Bratzler, DO, MPH, President and CEO, Oklahoma Foundation for Medical Quality; Scott Cerreta, RRT, Director of Education, COPD Foundation; Gerard J. Criner, MD, Director of Temple Lung Center and Divisions of Pulmonary and Critical Care Medicine, Temple University; Guy D'Andrea, MBA, President, Discern Consulting; Jonathan Fine, MD, Director of Pulmonary Fellowship, Research and Medical Education, Norwalk Hospital; David Hopkins, MS, PhD, Senior Advisor, Pacific Business Group on Health; Fred Martin Jacobs, MD, JD, FACP, FCCP, FCLM, Executive Vice President and Director, Saint Barnabas Quality Institute; Natalie Napolitano, MPH, RRT‐NPS, Respiratory Therapist, Inova Fairfax Hospital; Russell Robbins, MD, MBA, Principal and Senior Clinical Consultant, Mercer. In addition, the authors acknowledge and thank Angela Merrill, Sandi Nelson, Marian Wrobel, and Eric Schone from Mathematica Policy Research, Inc., Sharon‐Lise T. Normand from Harvard Medical School, and Lein Han and Michael Rapp at The Centers for Medicare & Medicaid Services for their contributions to this work.

Disclosures

Peter K. Lindenauer, MD, MSc, is the guarantor of this article, taking responsibility for the integrity of the work as a whole, from inception to published article, and takes responsibility for the content of the manuscript, including the data and data analysis. All authors have made substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; have drafted the submitted article or revised it critically for important intellectual content; and have provided final approval of the version to be published. Preparation of this manuscript was completed under Contract Number: HHSM-5002008-0025I/HHSM-500-T0001, Modification No. 000007, Option Year 2 Measure Instrument Development and Support (MIDS). Sponsors did not contribute to the development of the research or manuscript. Dr. Au reports being an unpaid research consultant for Bosch Inc. He receives research funding from the NIH, Department of Veterans Affairs, AHRQ, and Gilead Sciences. The views expressed in this manuscript represent those of the authors and do not necessarily represent those of the Department of Veterans Affairs. Drs. Drye and Bernheim report receiving contract funding from CMS to develop and maintain quality measures.

References
1. FASTSTATS—chronic lower respiratory disease. Available at: http://www.cdc.gov/nchs/fastats/copd.htm. Accessed September 18, 2010.
2. National Heart, Lung and Blood Institute. Morbidity and mortality chartbook. Available at: http://www.nhlbi.nih.gov/resources/docs/cht-book.htm. Accessed April 27, 2010.
3. Patil SP, Krishnan JA, Lechtzin N, Diette GB. In-hospital mortality following acute exacerbations of chronic obstructive pulmonary disease. Arch Intern Med. 2003;163(10):1180-1186.
4. Tabak YP, Sun X, Johannes RS, Gupta V, Shorr AF. Mortality and need for mechanical ventilation in acute exacerbations of chronic obstructive pulmonary disease: development and validation of a simple risk score. Arch Intern Med. 2009;169(17):1595-1602.
5. Lindenauer PK, Pekow P, Gao S, Crawford AS, Gutierrez B, Benjamin EM. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144(12):894-903.
6. Dransfield MT, Rowe SM, Johnson JE, Bailey WC, Gerald LB. Use of beta blockers and the risk of death in hospitalised patients with acute exacerbations of COPD. Thorax. 2008;63(4):301-305.
7. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP facts and figures: statistics on hospital-based care in the United States, 2007. 2009. Available at: http://www.hcup-us.ahrq.gov/reports.jsp. Accessed August 6, 2012.
8. Fruchter O, Yigla M. Predictors of long-term survival in elderly patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Respirology. 2008;13(6):851-855.
9. Faustini A, Marino C, D'Ippoliti D, Forastiere F, Belleudi V, Perucci CA. The impact on risk-factor analysis of different mortality outcomes in COPD patients. Eur Respir J. 2008;32(3):629-636.
10. Roberts CM, Lowe D, Bucknall CE, Ryland I, Kelly Y, Pearson MG. Clinical audit indicators of outcome following admission to hospital with acute exacerbation of chronic obstructive pulmonary disease. Thorax. 2002;57(2):137-141.
11. Mularski RA, Asch SM, Shrank WH, et al. The quality of obstructive lung disease care for adults in the United States as measured by adherence to recommended processes. Chest. 2006;130(6):1844-1850.
12. Bratzler DW, Oehlert WH, McAdams LM, Leon J, Jiang H, Piatt D. Management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: physician practices in the community hospital setting. J Okla State Med Assoc. 2004;97(6):227-232.
13. Corrigan J, Eden J, Smith B. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Washington, DC: National Academies Press; 2002.
14. Patient Protection and Affordable Care Act [H.R. 3590], Pub. L. No. 111-148, §2702, 124 Stat. 119, 318-319 (March 23, 2010). Available at: http://www.gpo.gov/fdsys/pkg/PLAW-111publ148/html/PLAW-111publ148.htm. Accessed July 15, 2012.
15. National Quality Forum. NQF Endorses Additional Pulmonary Measure. 2013. Available at: http://www.qualityforum.org/News_And_Resources/Press_Releases/2013/NQF_Endorses_Additional_Pulmonary_Measure.aspx. Accessed January 11, 2013.
16. National Quality Forum. National voluntary consensus standards for patient outcomes: a consensus report. Washington, DC: National Quality Forum; 2011.
17. The Measures Management System. The Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/index.html?redirect=/MMS/. Accessed August 6, 2012.
18. Krumholz HM, Brindis RG, Brush JE, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456-462.
19. Drye EE, Normand S-LT, Wang Y, et al. Comparison of hospital risk-standardized mortality rates calculated by using in-hospital and 30-day models: an observational study with implications for hospital profiling. Ann Intern Med. 2012;156(1 pt 1):19-26.
20. Pope G, Ellis R, Ash A, et al. Diagnostic cost group hierarchical condition category models for Medicare risk adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc.; 2000. Available at: http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Reports/downloads/pope_2000_2.pdf. Accessed November 7, 2009.
21. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
22. Harrell FE, Shih Y-CT. Using full probability models to compute probabilities of actual interest to decision makers. Int J Technol Assess Health Care. 2001;17(1):17-26.
23. Heffner JE, Mularski RA, Calverley PMA. COPD performance measures: missing opportunities for improving care. Chest. 2010;137(5):1181-1189.
24. Krumholz HM, Normand S-LT, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75-85.
25. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78.
26. Agabiti N, Belleudi V, Davoli M, et al. Profiling hospital performance to monitor the quality of care: the case of COPD. Eur Respir J. 2010;35(5):1031-1038.
27. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30-day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
28. Welch HG, Sharp SM, Gottlieb DJ, Skinner JS, Wennberg JE. Geographic variation in diagnosis frequency and risk of death among Medicare beneficiaries. JAMA. 2011;305(11):1113-1118.
29. Bratzler DW, Normand S-LT, Wang Y, et al. An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients. PLoS ONE. 2011;6(4):e17401.
30. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
Issue: Journal of Hospital Medicine - 8(8), Pages 428-435

Chronic obstructive pulmonary disease (COPD) affects as many as 24 million individuals in the United States, is responsible for more than 700,000 annual hospital admissions, and is currently the nation's third leading cause of death, accounting for nearly $49.9 billion in medical spending in 2010.[1, 2] Reported in‐hospital mortality rates for patients hospitalized for exacerbations of COPD range from 2% to 5%.[3, 4, 5, 6, 7] Information about 30‐day mortality rates following hospitalization for COPD is more limited; however, international studies suggest that rates range from 3% to 9%,[8, 9] and 90‐day mortality rates exceed 15%.[10]

Despite this significant clinical and economic impact, there have been no large‐scale, sustained efforts to measure the quality or outcomes of hospital care for patients with COPD in the United States. What little is known about the treatment of patients with COPD suggests widespread opportunities to increase adherence to guideline‐recommended therapies, to reduce the use of ineffective treatments and tests, and to address variation in care across institutions.[5, 11, 12]

Public reporting of hospital performance is a key strategy for improving the quality and safety of hospital care, both in the United States and internationally.[13] Since 2007, the Centers for Medicare and Medicaid Services (CMS) has reported hospital mortality rates on the Hospital Compare Web site, and COPD is 1 of the conditions highlighted in the Affordable Care Act for future consideration.[14] Such initiatives rely on validated, risk‐adjusted performance measures for comparisons across institutions and to enable outcomes to be tracked over time. We present the development, validation, and results of a model intended for public reporting of risk‐standardized mortality rates for patients hospitalized with exacerbations of COPD that has been endorsed by the National Quality Forum.[15]

METHODS

Approach to Measure Development

We developed this measure in accordance with guidelines described by the National Quality Forum,[16] CMS' Measure Management System,[17] and the American Heart Association scientific statement, Standards for Statistical Models Used for Public Reporting of Health Outcomes.[18] Throughout the process we obtained expert clinical and stakeholder input through meetings with a clinical advisory group and a national technical expert panel (see Acknowledgments). Last, we presented the proposed measure specifications and a summary of the technical expert panel discussions online and made a widely distributed call for public comments. We took the comments into consideration during the final stages of measure development (available at https://www.cms.gov/MMS/17_CallforPublicComment.asp).

Data Sources

We used claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files from 2008 to develop and validate the model, and examined model reliability using data from 2007 and 2009. The Medicare enrollment database was used to determine Medicare Fee‐for‐Service enrollment and mortality.

Study Cohort

Admissions were considered eligible for inclusion if the patient was 65 years or older, was admitted to a nonfederal acute care hospital in the United States, and had a principal diagnosis of COPD or a principal diagnosis of acute respiratory failure or respiratory arrest when paired with a secondary diagnosis of COPD with exacerbation (Table 1).

Table 1. ICD-9-CM Codes Used to Define the Measure Cohort

ICD-9-CM | Description
491.21 | Obstructive chronic bronchitis; with (acute) exacerbation; acute exacerbation of COPD, decompensated COPD, decompensated COPD with exacerbation
491.22 | Obstructive chronic bronchitis; with acute bronchitis
491.8 | Other chronic bronchitis; chronic: tracheitis, tracheobronchitis
491.9 | Unspecified chronic bronchitis
492.8 | Other emphysema; emphysema (lung or pulmonary): NOS, centriacinar, centrilobular, obstructive, panacinar, panlobular, unilateral, vesicular; MacLeod's syndrome; Swyer-James syndrome; unilateral hyperlucent lung
493.20 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, unspecified
493.21 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with status asthmaticus
493.22 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with (acute) exacerbation
496 | Chronic: nonspecific lung disease, obstructive lung disease, obstructive pulmonary disease (COPD) NOS. (Note: This code is not to be used with any code from categories 491-493.)
518.81a | Other diseases of lung; acute respiratory failure; respiratory failure NOS
518.82a | Other diseases of lung; acute respiratory failure; other pulmonary insufficiency, acute respiratory distress
518.84a | Other diseases of lung; acute respiratory failure; acute and chronic respiratory failure
799.1a | Other ill-defined and unknown causes of morbidity and mortality; respiratory arrest, cardiorespiratory failure

NOTE: Abbreviations: COPD, chronic obstructive pulmonary disease; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; NOS, not otherwise specified.
a Principal diagnosis when combined with a secondary diagnosis of acute exacerbation of COPD (491.21, 491.22, 493.21, or 493.22).

If a patient was discharged and readmitted to a second hospital on the same or the next day, we combined the 2 acute care admissions into a single episode of care and assigned the mortality outcome to the first admitting hospital. We excluded admissions for patients who were enrolled in Medicare Hospice in the 12 months prior to or on the first day of the index hospitalization. An index admission was any eligible admission assessed in the measure for the outcome. We also excluded admissions for patients who were discharged against medical advice, those for whom vital status at 30 days was unknown or recorded inconsistently, and patients with unreliable data (eg, age >115 years). For patients with multiple hospitalizations during a single year, we randomly selected 1 admission per patient to avoid survival bias. Finally, to assure adequate risk adjustment we limited the analysis to patients who had continuous enrollment in Medicare Fee‐for‐Service Parts A and B for the 12 months prior to their index admission so that we could identify comorbid conditions coded during all prior encounters.
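As an illustration only (the measure itself was implemented in SAS, and the record layout below is hypothetical), the same-day or next-day transfer rule described above can be sketched as follows. For simplicity this sketch merges any qualifying readmission for one patient, and attributes the combined episode to the first admitting hospital:

```python
from datetime import date

def link_episodes(admissions):
    """Collapse transfer chains into episodes of care.

    `admissions` is a list of dicts for a single patient. Readmission on
    the same day as discharge, or the next day, is folded into the prior
    stay; the episode keeps the first admitting hospital's id, which is
    where the mortality outcome is assigned.
    """
    episodes = []
    for adm in sorted(admissions, key=lambda a: a["admit"]):
        if episodes:
            prev = episodes[-1]
            gap = (adm["admit"] - prev["discharge"]).days
            if 0 <= gap <= 1:  # same-day or next-day readmission: merge
                prev["discharge"] = max(prev["discharge"], adm["discharge"])
                continue
        episodes.append({
            "hospital": adm["hospital"],  # outcome assigned here
            "admit": adm["admit"],
            "discharge": adm["discharge"],
        })
    return episodes

admissions = [
    {"hospital": "A", "admit": date(2008, 3, 1), "discharge": date(2008, 3, 5)},
    {"hospital": "B", "admit": date(2008, 3, 6), "discharge": date(2008, 3, 12)},  # next-day transfer
    {"hospital": "C", "admit": date(2008, 5, 1), "discharge": date(2008, 5, 4)},   # separate episode
]
episodes = link_episodes(admissions)
```

Here the transfer from hospital A to hospital B becomes one episode attributed to A, while the May admission remains a distinct episode.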

Outcomes

The outcome of 30‐day mortality was defined as death from any cause within 30 days of the admission date for the index hospitalization. Mortality was assessed at 30 days to standardize the period of outcome ascertainment,[19] and because 30 days is a clinically meaningful time frame, during which differences in the quality of hospital care may be revealed.
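A minimal sketch of this outcome window (function name and dates are ours; we assume the 30th day after admission is included):

```python
from datetime import date

# Death from any cause within 30 days of the *admission* date (not the
# discharge date) counts as the outcome; anchoring on admission
# standardizes the ascertainment period across hospitals with
# different lengths of stay.

def died_within_30_days(admit_date, death_date):
    if death_date is None:
        return False  # patient known alive at 30 days
    return 0 <= (death_date - admit_date).days <= 30

assert died_within_30_days(date(2008, 1, 1), date(2008, 1, 31))      # day 30: counted
assert not died_within_30_days(date(2008, 1, 1), date(2008, 2, 1))   # day 31: not counted
```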

Risk‐Adjustment Variables

We randomly selected half of all COPD admissions in 2008 that met the inclusion and exclusion criteria to create a model development sample. Candidate variables for inclusion in the risk‐standardized model were selected by a clinician team from diagnostic groups included in the Hierarchical Condition Category clinical classification system[20] and included age and comorbid conditions. Sleep apnea (International Classification of Diseases, 9th Revision, Clinical Modification [ICD‐9‐CM] condition codes 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, and 780.57) and mechanical ventilation (ICD‐9‐CM procedure codes 93.90, 96.70, 96.71, and 96.72) were also included as candidate variables.

We defined a condition as present for a given patient if it was coded in the inpatient, outpatient, or physician claims data sources in the preceding 12 months, including the index admission. Because a subset of the condition category variables can represent a complication of care, we did not consider them to be risk factors if they appeared only as secondary diagnosis codes for the index admission and not in claims submitted during the prior year.
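The rule distinguishing comorbidities from potential complications can be sketched as follows (the data structures and function name are ours, not the measure's actual implementation):

```python
# A condition category (CC) counts as a risk factor if it was coded in
# inpatient, outpatient, or physician claims in the 12 months preceding
# or including the index admission -- except that a condition that could
# represent a complication of care is not credited when it appears only
# as a secondary diagnosis on the index admission.

def is_risk_factor(cc, prior_year_ccs, index_secondary_ccs, can_be_complication):
    if cc in prior_year_ccs:
        return True  # documented before the index stay: clearly a comorbidity
    if cc in index_secondary_ccs:
        # Seen only on the index admission: credit it unless it may be a
        # complication of the hospitalization itself.
        return not can_be_complication
    return False

# A complication-prone condition coded only during the index stay is not
# used for risk adjustment:
assert not is_risk_factor("CC131", prior_year_ccs=set(),
                          index_secondary_ccs={"CC131"}, can_be_complication=True)
# The same code appearing in a prior-year claim does count:
assert is_risk_factor("CC131", prior_year_ccs={"CC131"},
                      index_secondary_ccs={"CC131"}, can_be_complication=True)
```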

We selected final variables for inclusion in the risk‐standardized model based on clinical considerations and a modified approach to stepwise logistic regression. The final patient‐level risk‐adjustment model included 42 variables (Table 2).

Table 2. Adjusted OR for Model Risk Factors and Mortality in Development Sample (Hierarchical Logistic Regression Model)

Variable | Development Sample (150,035 Admissions at 4537 Hospitals): Frequency, % / OR (95% CI) | Validation Sample (149,646 Admissions at 4535 Hospitals): Frequency, % / OR (95% CI)

Demographics
Age 65 years (continuous) | — / 1.03 (1.03-1.04) | — / 1.03 (1.03-1.04)

Cardiovascular/respiratory
Sleep apnea (ICD-9-CM: 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, 780.57)a | 9.57 / 0.87 (0.81-0.94) | 9.72 / 0.84 (0.78-0.90)
History of mechanical ventilation (ICD-9-CM: 93.90, 96.70, 96.71, 96.72)a | 6.00 / 1.19 (1.11-1.27) | 6.00 / 1.15 (1.08-1.24)
Respirator dependence/respiratory failure (CC 77-78)a | 1.15 / 0.89 (0.77-1.02) | 1.20 / 0.78 (0.68-0.91)
Cardiorespiratory failure and shock (CC 79) | 26.35 / 1.60 (1.53-1.68) | 26.34 / 1.59 (1.52-1.66)
Congestive heart failure (CC 80) | 41.50 / 1.34 (1.28-1.39) | 41.39 / 1.31 (1.25-1.36)
Chronic atherosclerosis (CC 83-84)a | 50.44 / 0.87 (0.83-0.90) | 50.12 / 0.91 (0.87-0.94)
Arrhythmias (CC 92-93) | 37.15 / 1.17 (1.12-1.22) | 37.06 / 1.15 (1.10-1.20)
Vascular or circulatory disease (CC 104-106) | 38.20 / 1.09 (1.05-1.14) | 38.09 / 1.02 (0.98-1.06)
Fibrosis of lung and other chronic lung disorder (CC 109) | 16.96 / 1.08 (1.03-1.13) | 17.08 / 1.11 (1.06-1.17)
Asthma (CC 110) | 17.05 / 0.67 (0.63-0.70) | 16.90 / 0.67 (0.63-0.70)
Pneumonia (CC 111-113) | 49.46 / 1.29 (1.24-1.35) | 49.41 / 1.27 (1.22-1.33)
Pleural effusion/pneumothorax (CC 114) | 11.78 / 1.17 (1.11-1.23) | 11.54 / 1.18 (1.12-1.25)
Other lung disorders (CC 115) | 53.07 / 0.80 (0.77-0.83) | 53.17 / 0.83 (0.80-0.87)

Other comorbid conditions
Metastatic cancer and acute leukemia (CC 7) | 2.76 / 2.34 (2.14-2.56) | 2.79 / 2.15 (1.97-2.35)
Lung, upper digestive tract, and other severe cancers (CC 8)a | 5.98 / 1.80 (1.68-1.92) | 6.02 / 1.98 (1.85-2.11)
Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal and other cancers and tumors; other respiratory and heart neoplasms (CC 9-11) | 14.13 / 1.03 (0.97-1.08) | 14.19 / 1.01 (0.95-1.06)
Other digestive and urinary neoplasms (CC 12) | 6.91 / 0.91 (0.84-0.98) | 7.05 / 0.85 (0.79-0.92)
Diabetes and DM complications (CC 15-20, 119-120) | 38.31 / 0.91 (0.87-0.94) | 38.29 / 0.91 (0.87-0.94)
Protein-calorie malnutrition (CC 21) | 7.40 / 2.18 (2.07-2.30) | 7.44 / 2.09 (1.98-2.20)
Disorders of fluid/electrolyte/acid-base (CC 22-23) | 32.05 / 1.13 (1.08-1.18) | 32.16 / 1.24 (1.19-1.30)
Other endocrine/metabolic/nutritional disorders (CC 24) | 67.99 / 0.75 (0.72-0.78) | 67.88 / 0.76 (0.73-0.79)
Other gastrointestinal disorders (CC 36) | 56.21 / 0.81 (0.78-0.84) | 56.18 / 0.78 (0.75-0.81)
Osteoarthritis of hip or knee (CC 40) | 9.32 / 0.74 (0.69-0.79) | 9.33 / 0.80 (0.74-0.85)
Other musculoskeletal and connective tissue disorders (CC 43) | 64.14 / 0.83 (0.80-0.86) | 64.20 / 0.83 (0.80-0.87)
Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 40.80 / 1.08 (1.04-1.12) | 40.72 / 1.08 (1.04-1.13)
Dementia and senility (CC 49-50) | 17.06 / 1.09 (1.04-1.14) | 16.97 / 1.09 (1.04-1.15)
Drug/alcohol abuse, without dependence (CC 53)a | 23.51 / 0.78 (0.75-0.82) | 23.38 / 0.76 (0.72-0.80)
Other psychiatric disorders (CC 60)a | 16.49 / 1.12 (1.07-1.18) | 16.43 / 1.12 (1.06-1.17)
Quadriplegia, paraplegia, functional disability (CC 67-69, 100-102, 177-178) | 4.92 / 1.03 (0.95-1.12) | 4.92 / 1.08 (0.99-1.17)
Mononeuropathy, other neurological conditions/injuries (CC 76) | 11.35 / 0.85 (0.80-0.91) | 11.28 / 0.88 (0.83-0.93)
Hypertension and hypertensive disease (CC 90-91) | 80.40 / 0.78 (0.75-0.82) | 80.35 / 0.79 (0.75-0.83)
Stroke (CC 95-96)a | 6.77 / 1.00 (0.93-1.08) | 6.73 / 0.98 (0.91-1.05)
Retinal disorders, except detachment and vascular retinopathies (CC 121) | 10.79 / 0.87 (0.82-0.93) | 10.69 / 0.90 (0.85-0.96)
Other eye disorders (CC 124)a | 19.05 / 0.90 (0.86-0.95) | 19.13 / 0.98 (0.85-0.93)
Other ear, nose, throat, and mouth disorders (CC 127) | 35.21 / 0.83 (0.80-0.87) | 35.02 / 0.80 (0.77-0.83)
Renal failure (CC 131)a | 17.92 / 1.12 (1.07-1.18) | 18.16 / 1.13 (1.08-1.19)
Decubitus ulcer or chronic skin ulcer (CC 148-149) | 7.42 / 1.27 (1.19-1.35) | 7.42 / 1.33 (1.25-1.42)
Other dermatological disorders (CC 153) | 28.46 / 0.90 (0.87-0.94) | 28.32 / 0.89 (0.86-0.93)
Trauma (CC 154-156, 158-161) | 9.04 / 1.09 (1.03-1.16) | 8.99 / 1.15 (1.08-1.22)
Vertebral fractures (CC 157) | 5.01 / 1.33 (1.24-1.44) | 4.97 / 1.29 (1.20-1.39)
Major complications of medical care and trauma (CC 164) | 5.47 / 0.81 (0.75-0.88) | 5.55 / 0.82 (0.76-0.89)

NOTE: Abbreviations: CI, confidence interval; DM, diabetes mellitus; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; OR, odds ratio; CC, condition category.
a Indicates variable forced into the model.

Model Derivation

We used hierarchical logistic regression models to model the log‐odds of mortality as a function of patient‐level clinical characteristics and a random hospital‐level intercept. At the patient level, each model adjusts the log‐odds of mortality for age and the selected clinical covariates. The second level models the hospital‐specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of mortality, after accounting for patient risk. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.
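In our own notation (symbols not taken from the article), the two-level model just described can be written as:

```latex
\operatorname{logit}\,\Pr(Y_{ij}=1 \mid \alpha_j, \mathbf{x}_{ij})
  = \alpha_j + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij},
\qquad
\alpha_j \sim N(\mu_\alpha, \tau^2)
```

where Y_ij indicates death within 30 days for patient i at hospital j, x_ij contains the patient-level covariates, α_j is the hospital-specific intercept, and τ² is the between-hospital variance. If τ² were 0, all hospitals would share a single intercept after adjustment for patient risk.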

Estimation of Hospital Risk‐Standardized Mortality Rate

We calculated a risk‐standardized mortality rate, defined as the ratio of predicted to expected deaths (similar to observed‐to‐expected), multiplied by the national unadjusted mortality rate.[21] The expected number of deaths for each hospital was estimated by applying the estimated regression coefficients to the characteristics of each hospital's patients, adding the average of the hospital‐specific intercepts, transforming the data by using an inverse logit function, and summing the data from all patients in the hospital to obtain the count. The predicted number of deaths was calculated in the same way, substituting the hospital‐specific intercept for the average hospital‐specific intercept.
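A toy numerical sketch of the predicted-to-expected calculation (all numbers and names below are hypothetical, not estimates from the measure):

```python
import math

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

# For each patient, the linear predictor x'beta is combined with either the
# hospital's own intercept (numerator, "predicted" deaths) or the average
# intercept (denominator, "expected" deaths); the ratio times the national
# unadjusted mortality rate gives the risk-standardized mortality rate.

def rsmr(linear_predictors, hospital_intercept, average_intercept, national_rate):
    predicted = sum(inv_logit(hospital_intercept + lp) for lp in linear_predictors)
    expected = sum(inv_logit(average_intercept + lp) for lp in linear_predictors)
    return (predicted / expected) * national_rate

lps = [-2.5, -1.8, -3.0, -2.2]  # hypothetical patient linear predictors

# A hospital whose intercept equals the average gets the national rate:
rate = rsmr(lps, hospital_intercept=0.1, average_intercept=0.1, national_rate=0.086)

# A hospital with a higher-than-average intercept gets a higher rate:
rate_high = rsmr(lps, hospital_intercept=0.6, average_intercept=0.1, national_rate=0.086)
```

Because the same patients appear in numerator and denominator, differences in case mix cancel and only the hospital-specific intercept moves the rate away from the national average.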

Model Performance, Validation, and Reliability Testing

We used the remaining admissions in 2008 as the model validation sample. We computed several summary statistics to assess the patient-level model performance in both the development and validation samples,[22] including over-fitting indices, predictive ability, area under the receiver operating characteristic (ROC) curve, distribution of residuals, and model χ2. In addition, we assessed face validity through a survey of members of the technical expert panel. To assess reliability of the model across data years, we repeated the modeling process using qualifying COPD admissions in both 2007 and 2009. Finally, to assess generalizability we evaluated the model's performance in an all-payer sample of data from patients admitted to California hospitals in 2006.

Analyses were conducted using SAS version 9.1.3 (SAS Institute Inc., Cary, NC). We estimated the hierarchical models using the GLIMMIX procedure in SAS.

The Human Investigation Committee at the Yale University School of Medicine/Yale New Haven Hospital approved an exemption (HIC#0903004927) for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation

After exclusions were applied, the development sample included 150,035 admissions in 2008 at 4537 US hospitals (Figure 1). Factors that were most strongly associated with the risk of mortality included metastatic cancer (odds ratio [OR] 2.34), protein-calorie malnutrition (OR 2.18), nonmetastatic cancers of the lung and upper digestive tract (OR 1.80), cardiorespiratory failure and shock (OR 1.60), and congestive heart failure (OR 1.34) (Table 2).

Figure 1
Model development and validation samples. Abbreviations: COPD, chronic obstructive pulmonary disease; FFS, Fee‐for‐Service. Exclusion categories are not mutually exclusive.

Model Performance, Validation, and Reliability

The model had a C statistic of 0.72, indicating good discrimination, and predicted mortality in the development sample ranged from 1.52% in the lowest decile to 23.74% in the highest. The model validation sample, using the remaining cases from 2008, included 149,646 admissions from 4535 hospitals. Variable frequencies and ORs were similar in both samples (Table 2). Model performance was also similar in the validation samples, with good model discrimination and fit (Table 3). Ten of 12 technical expert panel members responded to the survey, of whom 90% at least somewhat agreed with the statement, "the COPD mortality measure provides an accurate reflection of quality." When the model was applied to patients age 18 years and older in the 2006 California Patient Discharge Data, overall discrimination was good (C statistic, 0.74), including in patients age 18 to 64 years (C statistic, 0.75) and those age 65 years and older (C statistic, 0.70).

Table 3. Model Performance in Development and Validation Samples

Indices | Development Sample, 2008 | Validation Sample, 2008 | Data Year 2007 | Data Year 2009
Number of admissions | 150,035 | 149,646 | 259,911 | 279,377
Number of hospitals | 4537 | 4535 | 4636 | 4571
Mean risk-standardized mortality rate, % (SD) | 8.62 (0.94) | 8.64 (1.07) | 8.97 (1.12) | 8.08 (1.09)
Calibration, γ0, γ1 | 0.034, 0.985 | 0.009, 1.004 | 0.095, 1.022 | 0.120, 0.981
Discrimination: predictive ability, lowest decile %-highest decile % | 1.52-23.74 | 1.60-23.78 | 1.54-24.64 | 1.42-22.36
Discrimination: area under the ROC curve, C statistic | 0.720 | 0.723 | 0.728 | 0.722
Residuals lack of fit, Pearson residuals falling in each range, %
  <-2 | 0 | 0 | 0 | 0
  [-2, 0) | 91.14 | 91.4 | 91.08 | 91.93
  [0, 2) | 1.66 | 1.7 | 1.96 | 1.42
  >=2 | 6.93 | 6.91 | 6.96 | 6.65
Model Wald χ2 (number of covariates) | 6982.11 (42) | 7051.50 (42) | 13042.35 (42) | 12542.15 (42)
P value | <0.0001 | <0.0001 | <0.0001 | <0.0001
Between-hospital variance (standard error) | 0.067 (0.008) | 0.078 (0.009) | 0.067 (0.006) | 0.072 (0.006)

NOTE: Abbreviations: ROC, receiver operating characteristic; SD, standard deviation. Over-fitting indices (γ0, γ1) provide evidence of over-fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities are p̂ = 1/(1 + exp{-Xb}), and Z = Xb (eg, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample (eg, Logit(P(Y=1|Z)) = γ0 + γ1Z). Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over-fitting.
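The over-fitting check described in the Table 3 footnote can be reproduced in miniature (a sketch in our own notation; the original analyses used SAS, and all numbers here are simulated, not the study's data):

```python
import math
import random

# Simulate a "validation sample" whose outcomes truly follow the linear
# predictor Z, then refit Logit(P(Y=1|Z)) = g0 + g1*Z by Newton-Raphson.
# For a well-calibrated model g0 should land near 0 and g1 near 1.

def calibration_fit(z, y, iters=15):
    """Fit a two-parameter logistic regression of y on z."""
    g0, g1 = 0.0, 1.0
    for _ in range(iters):
        grad0 = grad1 = h00 = h01 = h11 = 0.0
        for zi, yi in zip(z, y):
            p = 1.0 / (1.0 + math.exp(-(g0 + g1 * zi)))
            grad0 += yi - p            # score for the intercept
            grad1 += (yi - p) * zi     # score for the slope
            w = p * (1.0 - p)          # Fisher information weights
            h00 += w
            h01 += w * zi
            h11 += w * zi * zi
        det = h00 * h11 - h01 * h01    # invert the 2x2 information matrix
        g0 += (h11 * grad0 - h01 * grad1) / det
        g1 += (h00 * grad1 - h01 * grad0) / det
    return g0, g1

random.seed(7)
# Linear predictors centered so average mortality is roughly 8%-9%,
# in line with the rates reported for this cohort.
z = [random.gauss(-2.4, 0.8) for _ in range(20000)]
y = [1 if random.random() < 1.0 / (1.0 + math.exp(-zi)) else 0 for zi in z]
g0, g1 = calibration_fit(z, y)
```

When a model is over-fitted to its development sample, the refitted slope in a genuine validation sample shrinks below 1, which is why values of γ1 near 1 in Table 3 are reassuring.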

Reliability testing demonstrated consistent performance over several years. The frequency and ORs of the variables included in the model showed only minor changes over time. The area under the ROC curve (C statistic) was 0.73 for the model in the 2007 sample and 0.72 for the model using 2009 data (Table 3).

Hospital Risk‐Standardized Mortality Rates

The mean unadjusted hospital 30-day mortality rate was 8.6% and ranged from 0% to 100% (Figure 2a). Risk-standardized mortality rates varied across hospitals (Figure 2b). The mean risk-standardized mortality rate was 8.6% and ranged from 5.9% to 13.5%. The odds of mortality at a hospital 1 standard deviation above average were 1.20 times those at a hospital 1 standard deviation below average.

Figure 2
(a) Distribution of hospital‐level 30‐day mortality rates and (b) hospital‐level 30‐day risk‐standardized mortality rates (2008 development sample; n = 150,035 admissions from 4537 hospitals). Abbreviations: COPD, chronic obstructive pulmonary disease.

DISCUSSION

We present a hospital‐level risk‐standardized mortality measure for patients admitted with COPD based on administrative claims data that are intended for public reporting and that have achieved endorsement by the National Quality Forum, a voluntary consensus standards‐setting organization. Across more than 4500 US hospitals, the mean 30‐day risk‐standardized mortality rate in 2008 was 8.6%, and we observed considerable variation across institutions, despite adjustment for case mix, suggesting that improvement by lower‐performing institutions may be an achievable goal.

Although improving the delivery of evidence‐based care processes and outcomes of patients with acute myocardial infarction, heart failure, and pneumonia has been the focus of national quality improvement efforts for more than a decade, COPD has largely been overlooked.[23] Within this context, this analysis represents the first attempt to systematically measure, at the hospital level, 30‐day all‐cause mortality for patients admitted to US hospitals for exacerbation of COPD. The model we have developed and validated is intended to be used to compare the performance of hospitals while controlling for differences in the pretreatment risk of mortality of patients and accounting for the clustering of patients within hospitals, and will facilitate surveillance of hospital‐level risk‐adjusted outcomes over time.

In contrast to process‐based measures of quality, such as the percentage of patients with pneumonia who receive appropriate antibiotic therapy, performance measures based on patient outcomes provide a more comprehensive view of care and are more consistent with patients' goals.[24] Additionally, it is well established that hospital performance on individual and composite process measures explains only a small amount of the observed variation in patient outcomes between institutions.[25] In this regard, outcome measures incorporate important, but difficult to measure aspects of care, such as diagnostic accuracy and timing, communication and teamwork, the recognition and response to complications, care coordination at the time of transfers between levels of care, and care settings. Nevertheless, when used for making inferences about the quality of hospital care, individual measures such as the risk‐standardized hospital mortality rate should be interpreted in the context of other performance measures, including readmission, patient experience, and costs of care.

A number of prior investigators have described the outcomes of care for patients hospitalized with exacerbations of COPD, including identifying risk factors for mortality. Patil et al. carried out an analysis of the 1996 Nationwide Inpatient Sample and described an overall in-hospital mortality rate of 2.5% among patients with COPD, and reported that a multivariable model containing patient sociodemographic characteristics and comorbidities had an area under the ROC curve of 0.70.[3] In contrast, this hospital-level measure includes patients with a principal diagnosis of respiratory failure and focuses on 30-day rather than inpatient mortality, accounting for the nearly 3-fold higher mortality rate we observed. In a more recent study that used clinical data from a large multistate database, Tabak et al. developed a prediction model for inpatient mortality for patients with COPD that contained only 4 factors: age, blood urea nitrogen, mental status, and pulse, and achieved an area under the ROC curve of 0.72.[4] The simplicity of such a model and its reliance on clinical measurements makes it particularly well suited for bedside application by clinicians, but less valuable for large-scale public reporting programs that rely on administrative data. In the only other study identified that focused on the assessment of hospital mortality rates, Agabiti et al. analyzed the outcomes of 12,756 patients hospitalized for exacerbations of COPD, using similar ICD-9-CM diagnostic criteria as in this study, at 21 hospitals in Rome, Italy.[26] They reported an average crude 30-day mortality rate of 3.8% among a group of 5 benchmark hospitals and an average mortality of 7.5% (range, 5.2%-17.2%) among the remaining institutions.

To put the variation we observed in mortality rates into a broader context, the relative difference in the risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure, whereas rates varied 30% for COPD, from 7.6% to 9.9%.[27] Model discrimination in COPD (C statistic, 0.72) was also similar to that reported for models used for public reporting of hospital mortality in acute myocardial infarction (C statistic, 0.71) and pneumonia (C statistic, 0.72).

This study has a number of important strengths. First, the model was developed from a large sample of recent Medicare claims, achieved good discrimination, and was validated in samples not limited to Medicare beneficiaries. Second, by including patients with a principal diagnosis of COPD, as well as those with a principal diagnosis of acute respiratory failure when accompanied by a secondary diagnosis of COPD with acute exacerbation, this model can be used to assess hospital performance across the full spectrum of disease severity. This broad set of ICD‐9‐CM codes used to define the cohort also ensures that efforts to measure hospital performance will be less influenced by differences in documentation and coding practices across hospitals relating to the diagnosis or sequencing of acute respiratory failure diagnoses. Moreover, the inclusion of patients with respiratory failure is important because these patients have the greatest risk of mortality, and are those in whom efforts to improve the quality and safety of care may have the greatest impact. Third, rather than relying solely on information documented during the index admission, we used ambulatory and inpatient claims from the full year prior to the index admission to identify comorbidities and to distinguish them from potential complications of care. Finally, we did not include factors such as hospital characteristics (eg, number of beds, teaching status) in the model. Although they might have improved overall predictive ability, the goal of the hospital mortality measure is to enable comparisons of mortality rates among hospitals while controlling for differences in patient characteristics. To the extent that factors such as size or teaching status might be independently associated with hospital outcomes, it would be inappropriate to adjust away their effects, because mortality risk should not be influenced by hospital characteristics other than through their effects on quality.

These results should be viewed in light of several limitations. First, we used ICD‐9‐CM codes derived from claims files to define the patient populations included in the measure rather than collecting clinical or physiologic information prospectively or through manual review of medical records, such as the forced expiratory volume in 1 second or whether the patient required long‐term oxygen therapy. Nevertheless, we included a broad set of potential diagnosis codes to capture the full spectrum of COPD exacerbations and to minimize differences in coding across hospitals. Second, because the risk‐adjustment included diagnoses coded in the year prior to the index admission, it is potentially subject to bias due to regional differences in medical care utilization that are not driven by underlying differences in patient illness.[28] Third, using administrative claims data, we observed some paradoxical associations in the model that are difficult to explain on clinical grounds, such as a protective effect of substance and alcohol abuse or prior episodes of respiratory failure. Fourth, although we excluded patients from the analysis who were enrolled in hospice prior to, or on the day of, the index admission, we did not exclude those who choose to withdraw support, transition to comfort measures only, or enrolled in hospice care during a hospitalization. We do not seek to penalize hospitals for being sensitive to the preferences of patients at the end of life. At the same time, it is equally important that the measure is capable of detecting the outcomes of suboptimal care that may in some instances lead a patient or their family to withdraw support or choose hospice. Finally, we did not have the opportunity to validate the model against a clinical registry of patients with COPD, because such data do not currently exist. 
Nevertheless, the use of claims as a surrogate for chart data for risk adjustment has been validated for several conditions, including acute myocardial infarction, heart failure, and pneumonia.[29, 30]

CONCLUSIONS

Risk‐standardized 30‐day mortality rates for Medicare beneficiaries with COPD vary across hospitals in the US. Calculating and reporting hospital outcomes using validated performance measures may catalyze quality improvement activities and lead to better outcomes. Additional research is needed to confirm that hospitals with lower mortality rates deliver care that better meets the goals of patients and their families than hospitals with higher mortality rates.

Acknowledgment

The authors thank the following members of the technical expert panel: Darlene Bainbridge, RN, MS, NHA, CPHQ, CPHRM, President/CEO, Darlene D. Bainbridge & Associates, Inc.; Robert A. Balk, MD, Director of Pulmonary and Critical Care Medicine, Rush University Medical Center; Dale Bratzler, DO, MPH, President and CEO, Oklahoma Foundation for Medical Quality; Scott Cerreta, RRT, Director of Education, COPD Foundation; Gerard J. Criner, MD, Director of Temple Lung Center and Divisions of Pulmonary and Critical Care Medicine, Temple University; Guy D'Andrea, MBA, President, Discern Consulting; Jonathan Fine, MD, Director of Pulmonary Fellowship, Research and Medical Education, Norwalk Hospital; David Hopkins, MS, PhD, Senior Advisor, Pacific Business Group on Health; Fred Martin Jacobs, MD, JD, FACP, FCCP, FCLM, Executive Vice President and Director, Saint Barnabas Quality Institute; Natalie Napolitano, MPH, RRT‐NPS, Respiratory Therapist, Inova Fairfax Hospital; Russell Robbins, MD, MBA, Principal and Senior Clinical Consultant, Mercer. In addition, the authors acknowledge and thank Angela Merrill, Sandi Nelson, Marian Wrobel, and Eric Schone from Mathematica Policy Research, Inc., Sharon‐Lise T. Normand from Harvard Medical School, and Lein Han and Michael Rapp at The Centers for Medicare & Medicaid Services for their contributions to this work.

Disclosures

Peter K. Lindenauer, MD, MSc, is the guarantor of this article, taking responsibility for the integrity of the work as a whole, from inception to published article, and takes responsibility for the content of the manuscript, including the data and data analysis. All authors have made substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; have drafted the submitted article or revised it critically for important intellectual content; and have provided final approval of the version to be published. Preparation of this manuscript was completed under Contract Number: HHSM-500-2008-0025I/HHSM-500-T0001, Modification No. 000007, Option Year 2 Measure Instrument Development and Support (MIDS). Sponsors did not contribute to the development of the research or manuscript. Dr. Au reports being an unpaid research consultant for Bosch Inc. He receives research funding from the NIH, Department of Veterans Affairs, AHRQ, and Gilead Sciences. The views expressed in this manuscript are those of the authors and do not necessarily represent those of the Department of Veterans Affairs. Drs. Drye and Bernheim report receiving contract funding from CMS to develop and maintain quality measures.

Chronic obstructive pulmonary disease (COPD) affects as many as 24 million individuals in the United States, is responsible for more than 700,000 annual hospital admissions, and is currently the nation's third leading cause of death; it accounted for nearly $49.9 billion in medical spending in 2010.[1, 2] Reported in‐hospital mortality rates for patients hospitalized for exacerbations of COPD range from 2% to 5%.[3, 4, 5, 6, 7] Information about 30‐day mortality rates following hospitalization for COPD is more limited; however, international studies suggest that rates range from 3% to 9%,[8, 9] and 90‐day mortality rates exceed 15%.[10]

Despite this significant clinical and economic impact, there have been no large‐scale, sustained efforts to measure the quality or outcomes of hospital care for patients with COPD in the United States. What little is known about the treatment of patients with COPD suggests widespread opportunities to increase adherence to guideline‐recommended therapies, to reduce the use of ineffective treatments and tests, and to address variation in care across institutions.[5, 11, 12]

Public reporting of hospital performance is a key strategy for improving the quality and safety of hospital care, both in the United States and internationally.[13] Since 2007, the Centers for Medicare and Medicaid Services (CMS) has reported hospital mortality rates on the Hospital Compare Web site, and COPD is 1 of the conditions highlighted in the Affordable Care Act for future consideration.[14] Such initiatives rely on validated, risk‐adjusted performance measures for comparisons across institutions and to enable outcomes to be tracked over time. We present the development, validation, and results of a model intended for public reporting of risk‐standardized mortality rates for patients hospitalized with exacerbations of COPD that has been endorsed by the National Quality Forum.[15]

METHODS

Approach to Measure Development

We developed this measure in accordance with guidelines described by the National Quality Forum,[16] CMS' Measure Management System,[17] and the American Heart Association scientific statement, Standards for Statistical Models Used for Public Reporting of Health Outcomes.[18] Throughout the process we obtained expert clinical and stakeholder input through meetings with a clinical advisory group and a national technical expert panel (see Acknowledgments). Last, we presented the proposed measure specifications and a summary of the technical expert panel discussions online and made a widely distributed call for public comments. We took the comments into consideration during the final stages of measure development (available at https://www.cms.gov/MMS/17_CallforPublicComment.asp).

Data Sources

We used claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files from 2008 to develop and validate the model, and examined model reliability using data from 2007 and 2009. The Medicare enrollment database was used to determine Medicare Fee‐for‐Service enrollment and mortality.

Study Cohort

Admissions were considered eligible for inclusion if the patient was 65 years or older, was admitted to a nonfederal acute care hospital in the United States, and had a principal diagnosis of COPD or a principal diagnosis of acute respiratory failure or respiratory arrest when paired with a secondary diagnosis of COPD with exacerbation (Table 1).

ICD-9-CM Codes Used to Define the Measure Cohort

NOTE: Abbreviations: COPD, chronic obstructive pulmonary disease; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; NOS, not otherwise specified. Codes marked (a) qualify as a principal diagnosis only when combined with a secondary diagnosis of acute exacerbation of COPD (491.21, 491.22, 493.21, or 493.22).

491.21: Obstructive chronic bronchitis; with (acute) exacerbation; acute exacerbation of COPD, decompensated COPD, decompensated COPD with exacerbation
491.22: Obstructive chronic bronchitis; with acute bronchitis
491.8: Other chronic bronchitis; chronic: tracheitis, tracheobronchitis
491.9: Unspecified chronic bronchitis
492.8: Other emphysema; emphysema (lung or pulmonary): NOS, centriacinar, centrilobular, obstructive, panacinar, panlobular, unilateral, vesicular; MacLeod's syndrome; Swyer-James syndrome; unilateral hyperlucent lung
493.20: Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, unspecified
493.21: Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with status asthmaticus
493.22: Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with (acute) exacerbation
496: Chronic: nonspecific lung disease, obstructive lung disease, obstructive pulmonary disease (COPD) NOS. (Note: This code is not to be used with any code from categories 491-493.)
518.81 (a): Other diseases of lung; acute respiratory failure; respiratory failure NOS
518.82 (a): Other diseases of lung; acute respiratory failure; other pulmonary insufficiency, acute respiratory distress
518.84 (a): Other diseases of lung; acute respiratory failure; acute and chronic respiratory failure
799.1 (a): Other ill-defined and unknown causes of morbidity and mortality; respiratory arrest, cardiorespiratory failure

If a patient was discharged and readmitted to a second hospital on the same or the next day, we combined the 2 acute care admissions into a single episode of care and assigned the mortality outcome to the first admitting hospital. We excluded admissions for patients who were enrolled in Medicare Hospice in the 12 months prior to or on the first day of the index hospitalization. An index admission was any eligible admission assessed in the measure for the outcome. We also excluded admissions for patients who were discharged against medical advice, those for whom vital status at 30 days was unknown or recorded inconsistently, and patients with unreliable data (eg, age >115 years). For patients with multiple hospitalizations during a single year, we randomly selected 1 admission per patient to avoid survival bias. Finally, to assure adequate risk adjustment we limited the analysis to patients who had continuous enrollment in Medicare Fee‐for‐Service Parts A and B for the 12 months prior to their index admission so that we could identify comorbid conditions coded during all prior encounters.
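The episode-linking and sampling rules above can be sketched in code. This is a simplified illustration in Python; the record fields `patient_id`, `hospital`, `admit_date`, and `discharge_date` are our own placeholders, not actual claims file layouts, and the remaining exclusions (hospice enrollment, discharge against medical advice, unknown vital status, incomplete Fee-for-Service enrollment) would be applied as simple filters around this step.

```python
import random
from datetime import date

def link_transfers(admissions):
    """Combine a discharge followed by admission to a second hospital on the
    same or the next day into a single episode of care, attributing the
    episode (and its mortality outcome) to the first admitting hospital.
    Input must be sorted by (patient_id, admit_date)."""
    episodes = []
    for adm in admissions:
        if (episodes
                and episodes[-1]["patient_id"] == adm["patient_id"]
                and (adm["admit_date"] - episodes[-1]["discharge_date"]).days <= 1):
            # Same episode of care: extend the stay; hospital attribution
            # remains with the first admitting hospital.
            episodes[-1]["discharge_date"] = adm["discharge_date"]
        else:
            episodes.append(dict(adm))
    return episodes

def select_index_admissions(episodes, rng=random):
    """Randomly select one eligible episode per patient to avoid survival bias."""
    by_patient = {}
    for ep in episodes:
        by_patient.setdefault(ep["patient_id"], []).append(ep)
    return [rng.choice(eps) for eps in by_patient.values()]
```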

Outcomes

The outcome of 30‐day mortality was defined as death from any cause within 30 days of the admission date for the index hospitalization. Mortality was assessed at 30 days to standardize the period of outcome ascertainment,[19] and because 30 days is a clinically meaningful time frame, during which differences in the quality of hospital care may be revealed.

Risk‐Adjustment Variables

We randomly selected half of all COPD admissions in 2008 that met the inclusion and exclusion criteria to create a model development sample. Candidate variables for inclusion in the risk‐standardized model were selected by a clinician team from diagnostic groups included in the Hierarchical Condition Category clinical classification system[20] and included age and comorbid conditions. Sleep apnea (International Classification of Diseases, 9th Revision, Clinical Modification [ICD‐9‐CM] condition codes 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, and 780.57) and mechanical ventilation (ICD‐9‐CM procedure codes 93.90, 96.70, 96.71, and 96.72) were also included as candidate variables.

We defined a condition as present for a given patient if it was coded in the inpatient, outpatient, or physician claims data sources in the preceding 12 months, including the index admission. Because a subset of the condition category variables can represent a complication of care, we did not consider them to be risk factors if they appeared only as secondary diagnosis codes for the index admission and not in claims submitted during the prior year.
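The rule distinguishing comorbidities from potential complications can be expressed as a small predicate. This is our own sketch, not the measure's production code; `complication_prone_ccs` stands for the subset of condition categories that can represent complications of care.

```python
def counts_as_risk_factor(cc, prior_year_ccs, index_secondary_ccs,
                          complication_prone_ccs):
    """Return True if condition category `cc` qualifies as a risk factor.

    A condition counts if it was coded in inpatient, outpatient, or physician
    claims during the 12 months before the index admission. Conditions that
    can represent complications of care are disqualified when they appear
    only as secondary diagnoses on the index admission itself.
    """
    if cc in prior_year_ccs:
        return True  # Documented before the index admission: a true comorbidity.
    if cc in index_secondary_ccs:
        # Coded only during the index admission: accept unless it could be a
        # complication of the care delivered during that stay.
        return cc not in complication_prone_ccs
    return False
```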

We selected final variables for inclusion in the risk‐standardized model based on clinical considerations and a modified approach to stepwise logistic regression. The final patient‐level risk‐adjustment model included 42 variables (Table 2).

Adjusted OR for Model Risk Factors and Mortality in Development Sample (Hierarchical Logistic Regression Model)

Each variable is shown as: frequency %, OR (95% CI) in the development sample (150,035 admissions at 4537 hospitals) | frequency %, OR (95% CI) in the validation sample (149,646 admissions at 4535 hospitals).

NOTE: Abbreviations: CI, confidence interval; DM, diabetes mellitus; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; OR, odds ratio; CC, condition category. (a) indicates a variable forced into the model.

Demographics
Age-65 years (continuous): OR 1.03 (1.03-1.04) | OR 1.03 (1.03-1.04)

Cardiovascular/respiratory
Sleep apnea (ICD-9-CM: 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, 780.57) (a): 9.57%, 0.87 (0.81-0.94) | 9.72%, 0.84 (0.78-0.90)
History of mechanical ventilation (ICD-9-CM: 93.90, 96.70, 96.71, 96.72) (a): 6.00%, 1.19 (1.11-1.27) | 6.00%, 1.15 (1.08-1.24)
Respirator dependence/respiratory failure (CC 77-78) (a): 1.15%, 0.89 (0.77-1.02) | 1.20%, 0.78 (0.68-0.91)
Cardiorespiratory failure and shock (CC 79): 26.35%, 1.60 (1.53-1.68) | 26.34%, 1.59 (1.52-1.66)
Congestive heart failure (CC 80): 41.50%, 1.34 (1.28-1.39) | 41.39%, 1.31 (1.25-1.36)
Chronic atherosclerosis (CC 83-84) (a): 50.44%, 0.87 (0.83-0.90) | 50.12%, 0.91 (0.87-0.94)
Arrhythmias (CC 92-93): 37.15%, 1.17 (1.12-1.22) | 37.06%, 1.15 (1.10-1.20)
Vascular or circulatory disease (CC 104-106): 38.20%, 1.09 (1.05-1.14) | 38.09%, 1.02 (0.98-1.06)
Fibrosis of lung and other chronic lung disorder (CC 109): 16.96%, 1.08 (1.03-1.13) | 17.08%, 1.11 (1.06-1.17)
Asthma (CC 110): 17.05%, 0.67 (0.63-0.70) | 16.90%, 0.67 (0.63-0.70)
Pneumonia (CC 111-113): 49.46%, 1.29 (1.24-1.35) | 49.41%, 1.27 (1.22-1.33)
Pleural effusion/pneumothorax (CC 114): 11.78%, 1.17 (1.11-1.23) | 11.54%, 1.18 (1.12-1.25)
Other lung disorders (CC 115): 53.07%, 0.80 (0.77-0.83) | 53.17%, 0.83 (0.80-0.87)

Other comorbid conditions
Metastatic cancer and acute leukemia (CC 7): 2.76%, 2.34 (2.14-2.56) | 2.79%, 2.15 (1.97-2.35)
Lung, upper digestive tract, and other severe cancers (CC 8) (a): 5.98%, 1.80 (1.68-1.92) | 6.02%, 1.98 (1.85-2.11)
Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal and other cancers and tumors; other respiratory and heart neoplasms (CC 9-11): 14.13%, 1.03 (0.97-1.08) | 14.19%, 1.01 (0.95-1.06)
Other digestive and urinary neoplasms (CC 12): 6.91%, 0.91 (0.84-0.98) | 7.05%, 0.85 (0.79-0.92)
Diabetes and DM complications (CC 15-20, 119-120): 38.31%, 0.91 (0.87-0.94) | 38.29%, 0.91 (0.87-0.94)
Protein-calorie malnutrition (CC 21): 7.40%, 2.18 (2.07-2.30) | 7.44%, 2.09 (1.98-2.20)
Disorders of fluid/electrolyte/acid-base (CC 22-23): 32.05%, 1.13 (1.08-1.18) | 32.16%, 1.24 (1.19-1.30)
Other endocrine/metabolic/nutritional disorders (CC 24): 67.99%, 0.75 (0.72-0.78) | 67.88%, 0.76 (0.73-0.79)
Other gastrointestinal disorders (CC 36): 56.21%, 0.81 (0.78-0.84) | 56.18%, 0.78 (0.75-0.81)
Osteoarthritis of hip or knee (CC 40): 9.32%, 0.74 (0.69-0.79) | 9.33%, 0.80 (0.74-0.85)
Other musculoskeletal and connective tissue disorders (CC 43): 64.14%, 0.83 (0.80-0.86) | 64.20%, 0.83 (0.80-0.87)
Iron deficiency and other/unspecified anemias and blood disease (CC 47): 40.80%, 1.08 (1.04-1.12) | 40.72%, 1.08 (1.04-1.13)
Dementia and senility (CC 49-50): 17.06%, 1.09 (1.04-1.14) | 16.97%, 1.09 (1.04-1.15)
Drug/alcohol abuse, without dependence (CC 53) (a): 23.51%, 0.78 (0.75-0.82) | 23.38%, 0.76 (0.72-0.80)
Other psychiatric disorders (CC 60) (a): 16.49%, 1.12 (1.07-1.18) | 16.43%, 1.12 (1.06-1.17)
Quadriplegia, paraplegia, functional disability (CC 67-69, 100-102, 177-178): 4.92%, 1.03 (0.95-1.12) | 4.92%, 1.08 (0.99-1.17)
Mononeuropathy, other neurological conditions/injuries (CC 76): 11.35%, 0.85 (0.80-0.91) | 11.28%, 0.88 (0.83-0.93)
Hypertension and hypertensive disease (CC 90-91): 80.40%, 0.78 (0.75-0.82) | 80.35%, 0.79 (0.75-0.83)
Stroke (CC 95-96) (a): 6.77%, 1.00 (0.93-1.08) | 6.73%, 0.98 (0.91-1.05)
Retinal disorders, except detachment and vascular retinopathies (CC 121): 10.79%, 0.87 (0.82-0.93) | 10.69%, 0.90 (0.85-0.96)
Other eye disorders (CC 124) (a): 19.05%, 0.90 (0.86-0.95) | 19.13%, 0.98 (0.85-0.93)
Other ear, nose, throat, and mouth disorders (CC 127): 35.21%, 0.83 (0.80-0.87) | 35.02%, 0.80 (0.77-0.83)
Renal failure (CC 131) (a): 17.92%, 1.12 (1.07-1.18) | 18.16%, 1.13 (1.08-1.19)
Decubitus ulcer or chronic skin ulcer (CC 148-149): 7.42%, 1.27 (1.19-1.35) | 7.42%, 1.33 (1.25-1.42)
Other dermatological disorders (CC 153): 28.46%, 0.90 (0.87-0.94) | 28.32%, 0.89 (0.86-0.93)
Trauma (CC 154-156, 158-161): 9.04%, 1.09 (1.03-1.16) | 8.99%, 1.15 (1.08-1.22)
Vertebral fractures (CC 157): 5.01%, 1.33 (1.24-1.44) | 4.97%, 1.29 (1.20-1.39)
Major complications of medical care and trauma (CC 164): 5.47%, 0.81 (0.75-0.88) | 5.55%, 0.82 (0.76-0.89)

Model Derivation

We used hierarchical logistic regression models to model the log‐odds of mortality as a function of patient‐level clinical characteristics and a random hospital‐level intercept. At the patient level, each model adjusts the log‐odds of mortality for age and the selected clinical covariates. The second level models the hospital‐specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of mortality, after accounting for patient risk. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.
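Written out, the two-level model just described takes the following form (our notation, restating the text above):

```latex
\operatorname{logit}\,\Pr(Y_{ij} = 1 \mid \mathbf{Z}_{ij}, \alpha_j)
  = \alpha_j + \boldsymbol{\beta}^{\top}\mathbf{Z}_{ij},
\qquad
\alpha_j \sim N(\mu, \tau^2),
```

where Y_ij indicates death within 30 days for patient i at hospital j, Z_ij contains age and the selected clinical covariates, α_j is the hospital-specific random intercept, μ is its mean across hospitals, and τ² is the between-hospital variance. Under this parameterization, no between-hospital differences would correspond to τ² = 0, with every α_j equal to μ.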

Estimation of Hospital Risk‐Standardized Mortality Rate

We calculated a risk‐standardized mortality rate, defined as the ratio of predicted to expected deaths (similar to observed‐to‐expected), multiplied by the national unadjusted mortality rate.[21] The expected number of deaths for each hospital was estimated by applying the estimated regression coefficients to the characteristics of each hospital's patients, adding the average of the hospital‐specific intercepts, transforming the data by using an inverse logit function, and summing the data from all patients in the hospital to obtain the count. The predicted number of deaths was calculated in the same way, substituting the hospital‐specific intercept for the average hospital‐specific intercept.
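In code, the ratio-of-predicted-to-expected calculation might be sketched as follows. This is our own simplified illustration, not the measure's production code; `linear_predictors` stands for each patient's summed covariate effects (beta transposed times Z) from the fitted model.

```python
import math

def inv_logit(x):
    """Inverse logit (logistic) transformation."""
    return 1.0 / (1.0 + math.exp(-x))

def risk_standardized_rate(linear_predictors, hospital_intercept,
                           mean_intercept, national_rate):
    """RSMR = (predicted deaths / expected deaths) x national unadjusted rate.

    `expected` applies the average hospital-specific intercept to this
    hospital's patients; `predicted` substitutes the hospital's own intercept.
    """
    expected = sum(inv_logit(mean_intercept + lp) for lp in linear_predictors)
    predicted = sum(inv_logit(hospital_intercept + lp) for lp in linear_predictors)
    return (predicted / expected) * national_rate
```

A hospital whose intercept equals the average of the hospital-specific intercepts thus receives exactly the national unadjusted rate; intercepts above (below) the average yield proportionally higher (lower) risk-standardized rates.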

Model Performance, Validation, and Reliability Testing

We used the remaining admissions in 2008 as the model validation sample. We computed several summary statistics to assess the patient‐level model performance in both the development and validation samples,[22] including over‐fitting indices, predictive ability, area under the receiver operating characteristic (ROC) curve, distribution of residuals, and model χ2. In addition, we assessed face validity through a survey of members of the technical expert panel. To assess reliability of the model across data years, we repeated the modeling process using qualifying COPD admissions in both 2007 and 2009. Finally, to assess generalizability we evaluated the model's performance in an all‐payer sample of data from patients admitted to California hospitals in 2006.
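One of these checks, the over-fitting (calibration) indices, works by computing the development model's linear predictor Z = Xb for each validation patient and refitting a one-covariate logistic regression, logit P(Y = 1 | Z) = γ0 + γ1·Z, in the validation sample; γ0 near 0 and γ1 near 1 indicate little over-fitting. A minimal pure-Python sketch of that refit (our own illustration, not the measure code):

```python
import math

def fit_calibration(z, y, iters=50):
    """Estimate (gamma0, gamma1) by maximum likelihood for
    logit P(Y=1|Z) = gamma0 + gamma1*Z via Newton-Raphson.

    z: linear predictors from the development model applied to
       validation patients; y: observed outcomes.
    """
    g0, g1 = 0.0, 0.0
    for _ in range(iters):
        # Gradient of the log-likelihood and (negative) Hessian entries.
        grad0 = grad1 = 0.0
        h00 = h01 = h11 = 0.0
        for zi, yi in zip(z, y):
            p = 1.0 / (1.0 + math.exp(-(g0 + g1 * zi)))
            grad0 += yi - p
            grad1 += (yi - p) * zi
            w = p * (1.0 - p)
            h00 += w
            h01 += w * zi
            h11 += w * zi * zi
        det = h00 * h11 - h01 * h01
        if abs(det) < 1e-12:
            break
        # Newton step: solve the 2x2 system H * delta = grad.
        g0 += (h11 * grad0 - h01 * grad1) / det
        g1 += (h00 * grad1 - h01 * grad0) / det
    return g0, g1
```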

Analyses were conducted using SAS version 9.1.3 (SAS Institute Inc., Cary, NC). We estimated the hierarchical models using the GLIMMIX procedure in SAS.

The Human Investigation Committee at the Yale University School of Medicine/Yale New Haven Hospital approved an exemption (HIC#0903004927) for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation

After exclusions were applied, the development sample included 150,035 admissions in 2008 at 4537 US hospitals (Figure 1). Factors that were most strongly associated with the risk of mortality included metastatic cancer (odds ratio [OR] 2.34), protein-calorie malnutrition (OR 2.18), nonmetastatic cancers of the lung and upper digestive tract (OR 1.80), cardiorespiratory failure and shock (OR 1.60), and congestive heart failure (OR 1.34) (Table 2).

Figure 1
Model development and validation samples. Abbreviations: COPD, chronic obstructive pulmonary disease; FFS, Fee‐for‐Service. Exclusion categories are not mutually exclusive.

Model Performance, Validation, and Reliability

The model had a C statistic of 0.72, indicating good discrimination, and predicted mortality in the development sample ranged from 1.52% in the lowest decile to 23.74% in the highest. The model validation sample, using the remaining cases from 2008, included 149,646 admissions from 4535 hospitals. Variable frequencies and ORs were similar in both samples (Table 2). Model performance was also similar in the validation samples, with good model discrimination and fit (Table 3). Ten of 12 technical expert panel members responded to the survey, of whom 90% at least somewhat agreed with the statement, "the COPD mortality measure provides an accurate reflection of quality." When the model was applied to patients age 18 years and older in the 2006 California Patient Discharge Data, overall discrimination was good (C statistic, 0.74), including in those age 18 to 64 years (C statistic, 0.75) and those age 65 years and older (C statistic, 0.70).

Model Performance in Development and Validation Samples

Values are shown in the order: development sample (2008) | validation sample (2008) | 2007 data | 2009 data.

NOTE: Abbreviations: ROC, receiver operating characteristic; SD, standard deviation. The over-fitting indices (γ0, γ1) provide evidence of over-fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients, predicted probabilities p̂ = 1/(1 + exp{-Xb}), and Z = Xb (ie, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample (ie, logit(P(Y = 1|Z)) = γ0 + γ1·Z). Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over-fitting.

Number of admissions: 150,035 | 149,646 | 259,911 | 279,377
Number of hospitals: 4537 | 4535 | 4636 | 4571
Mean risk-standardized mortality rate, % (SD): 8.62 (0.94) | 8.64 (1.07) | 8.97 (1.12) | 8.08 (1.09)
Calibration (γ0, γ1): 0.034, 0.985 | 0.009, 1.004 | 0.095, 1.022 | 0.120, 0.981
Discrimination, predictive ability (lowest to highest decile, %): 1.52-23.74 | 1.60-23.78 | 1.54-24.64 | 1.42-22.36
Discrimination, area under the ROC curve (C statistic): 0.720 | 0.723 | 0.728 | 0.722
Residuals lack of fit (Pearson residual, % of patients):
  Below -2: 0 | 0 | 0 | 0
  [-2, 0): 91.14 | 91.4 | 91.08 | 91.93
  [0, 2): 1.66 | 1.7 | 1.96 | 1.42
  2 or above: 6.93 | 6.91 | 6.96 | 6.65
Model Wald χ2 (number of covariates): 6982.11 (42) | 7051.50 (42) | 13,042.35 (42) | 12,542.15 (42)
P value: <0.0001 | <0.0001 | <0.0001 | <0.0001
Between-hospital variance (standard error): 0.067 (0.008) | 0.078 (0.009) | 0.067 (0.006) | 0.072 (0.006)

Reliability testing demonstrated consistent performance over several years. The frequency and ORs of the variables included in the model showed only minor changes over time. The area under the ROC curve (C statistic) was 0.73 for the model in the 2007 sample and 0.72 for the model using 2009 data (Table 3).

Hospital Risk‐Standardized Mortality Rates

The mean unadjusted hospital 30‐day mortality rate was 8.6% and ranged from 0% to 100% (Figure 2a). Risk‐standardized mortality rates varied across hospitals (Figure 2b). The mean risk‐standardized mortality rate was 8.6% and ranged from 5.9% to 13.5%. The odds of mortality at a hospital 1 standard deviation above average was 1.20 times that of a hospital 1 standard deviation below average.

Figure 2
(a) Distribution of hospital‐level 30‐day mortality rates and (b) hospital‐level 30‐day risk‐standardized mortality rates (2008 development sample; n = 150,035 admissions from 4537 hospitals). Abbreviations: COPD, chronic obstructive pulmonary disease.

DISCUSSION

We present a hospital-level risk-standardized mortality measure for patients admitted with COPD, based on administrative claims data, that is intended for public reporting and has achieved endorsement by the National Quality Forum, a voluntary consensus standards-setting organization. Across more than 4500 US hospitals, the mean 30-day risk-standardized mortality rate in 2008 was 8.6%, and we observed considerable variation across institutions despite adjustment for case mix, suggesting that improvement by lower-performing institutions may be an achievable goal.

Although improving the delivery of evidence‐based care processes and outcomes of patients with acute myocardial infarction, heart failure, and pneumonia has been the focus of national quality improvement efforts for more than a decade, COPD has largely been overlooked.[23] Within this context, this analysis represents the first attempt to systematically measure, at the hospital level, 30‐day all‐cause mortality for patients admitted to US hospitals for exacerbation of COPD. The model we have developed and validated is intended to be used to compare the performance of hospitals while controlling for differences in the pretreatment risk of mortality of patients and accounting for the clustering of patients within hospitals, and will facilitate surveillance of hospital‐level risk‐adjusted outcomes over time.

In contrast to process‐based measures of quality, such as the percentage of patients with pneumonia who receive appropriate antibiotic therapy, performance measures based on patient outcomes provide a more comprehensive view of care and are more consistent with patients' goals.[24] Additionally, it is well established that hospital performance on individual and composite process measures explains only a small amount of the observed variation in patient outcomes between institutions.[25] In this regard, outcome measures incorporate important, but difficult to measure aspects of care, such as diagnostic accuracy and timing, communication and teamwork, the recognition and response to complications, care coordination at the time of transfers between levels of care, and care settings. Nevertheless, when used for making inferences about the quality of hospital care, individual measures such as the risk‐standardized hospital mortality rate should be interpreted in the context of other performance measures, including readmission, patient experience, and costs of care.

A number of prior investigators have described the outcomes of care for patients hospitalized with exacerbations of COPD, including identifying risk factors for mortality. Patil et al. carried out an analysis of the 1996 Nationwide Inpatient Sample and described an overall in-hospital mortality rate of 2.5% among patients with COPD, and reported that a multivariable model containing patient sociodemographic characteristics and comorbidities had an area under the ROC curve of 0.70.[3] In contrast, this hospital-level measure includes patients with a principal diagnosis of respiratory failure and focuses on 30-day rather than inpatient mortality, accounting for the nearly 3-fold higher mortality rate we observed. In a more recent study that used clinical data from a large multistate database, Tabak et al. developed a prediction model for inpatient mortality for patients with COPD that contained only 4 factors: age, blood urea nitrogen, mental status, and pulse, and achieved an area under the ROC curve of 0.72.[4] The simplicity of such a model and its reliance on clinical measurements makes it particularly well suited for bedside application by clinicians, but less valuable for large-scale public reporting programs that rely on administrative data. In the only other study identified that focused on the assessment of hospital mortality rates, Agabiti et al. analyzed the outcomes of 12,756 patients hospitalized for exacerbations of COPD, using similar ICD-9-CM diagnostic criteria as in this study, at 21 hospitals in Rome, Italy.[26] They reported an average crude 30-day mortality rate of 3.8% among a group of 5 benchmark hospitals and an average mortality of 7.5% (range, 5.2%-17.2%) among the remaining institutions.

To put the variation we observed in mortality rates into a broader context, the relative difference in the risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure, whereas rates varied 30% for COPD, from 7.6% to 9.9%.[27] Model discrimination in COPD (C statistic, 0.72) was also similar to that reported for models used for public reporting of hospital mortality in acute myocardial infarction (C statistic, 0.71) and pneumonia (C statistic, 0.72).

This study has a number of important strengths. First, the model was developed from a large sample of recent Medicare claims, achieved good discrimination, and was validated in samples not limited to Medicare beneficiaries. Second, by including patients with a principal diagnosis of COPD, as well as those with a principal diagnosis of acute respiratory failure when accompanied by a secondary diagnosis of COPD with acute exacerbation, this model can be used to assess hospital performance across the full spectrum of disease severity. This broad set of ICD‐9‐CM codes used to define the cohort also ensures that efforts to measure hospital performance will be less influenced by differences in documentation and coding practices across hospitals relating to the diagnosis or sequencing of acute respiratory failure diagnoses. Moreover, the inclusion of patients with respiratory failure is important because these patients have the greatest risk of mortality, and are those in whom efforts to improve the quality and safety of care may have the greatest impact. Third, rather than relying solely on information documented during the index admission, we used ambulatory and inpatient claims from the full year prior to the index admission to identify comorbidities and to distinguish them from potential complications of care. Finally, we did not include factors such as hospital characteristics (eg, number of beds, teaching status) in the model. Although they might have improved overall predictive ability, the goal of the hospital mortality measure is to enable comparisons of mortality rates among hospitals while controlling for differences in patient characteristics. To the extent that factors such as size or teaching status might be independently associated with hospital outcomes, it would be inappropriate to adjust away their effects, because mortality risk should not be influenced by hospital characteristics other than through their effects on quality.

These results should be viewed in light of several limitations. First, we used ICD‐9‐CM codes derived from claims files to define the patient populations included in the measure rather than collecting clinical or physiologic information prospectively or through manual review of medical records (eg, the forced expiratory volume in 1 second or whether the patient required long‐term oxygen therapy). Nevertheless, we included a broad set of potential diagnosis codes to capture the full spectrum of COPD exacerbations and to minimize differences in coding across hospitals. Second, because the risk adjustment included diagnoses coded in the year prior to the index admission, it is potentially subject to bias from regional differences in medical care utilization that are not driven by underlying differences in patient illness.[28] Third, using administrative claims data, we observed some paradoxical associations in the model that are difficult to explain on clinical grounds, such as a protective effect of substance and alcohol abuse or prior episodes of respiratory failure. Fourth, although we excluded patients who were enrolled in hospice prior to, or on the day of, the index admission, we did not exclude those who chose to withdraw support, transitioned to comfort measures only, or enrolled in hospice care during a hospitalization. We do not seek to penalize hospitals for being sensitive to the preferences of patients at the end of life. At the same time, it is equally important that the measure be capable of detecting the outcomes of suboptimal care that may in some instances lead a patient or their family to withdraw support or choose hospice. Finally, we did not have the opportunity to validate the model against a clinical registry of patients with COPD, because such data do not currently exist. Nevertheless, the use of claims as a surrogate for chart data for risk adjustment has been validated for several conditions, including acute myocardial infarction, heart failure, and pneumonia.[29, 30]

CONCLUSIONS

Risk‐standardized 30‐day mortality rates for Medicare beneficiaries with COPD vary across hospitals in the US. Calculating and reporting hospital outcomes using validated performance measures may catalyze quality improvement activities and lead to better outcomes. Additional research would be helpful to confirm that hospitals with lower mortality rates achieve care that better meets the goals of patients and their families than hospitals with higher mortality rates do.

Acknowledgment

The authors thank the following members of the technical expert panel: Darlene Bainbridge, RN, MS, NHA, CPHQ, CPHRM, President/CEO, Darlene D. Bainbridge & Associates, Inc.; Robert A. Balk, MD, Director of Pulmonary and Critical Care Medicine, Rush University Medical Center; Dale Bratzler, DO, MPH, President and CEO, Oklahoma Foundation for Medical Quality; Scott Cerreta, RRT, Director of Education, COPD Foundation; Gerard J. Criner, MD, Director of Temple Lung Center and Divisions of Pulmonary and Critical Care Medicine, Temple University; Guy D'Andrea, MBA, President, Discern Consulting; Jonathan Fine, MD, Director of Pulmonary Fellowship, Research and Medical Education, Norwalk Hospital; David Hopkins, MS, PhD, Senior Advisor, Pacific Business Group on Health; Fred Martin Jacobs, MD, JD, FACP, FCCP, FCLM, Executive Vice President and Director, Saint Barnabas Quality Institute; Natalie Napolitano, MPH, RRT‐NPS, Respiratory Therapist, Inova Fairfax Hospital; Russell Robbins, MD, MBA, Principal and Senior Clinical Consultant, Mercer. In addition, the authors acknowledge and thank Angela Merrill, Sandi Nelson, Marian Wrobel, and Eric Schone from Mathematica Policy Research, Inc., Sharon‐Lise T. Normand from Harvard Medical School, and Lein Han and Michael Rapp at The Centers for Medicare & Medicaid Services for their contributions to this work.

Disclosures

Peter K. Lindenauer, MD, MSc, is the guarantor of this article, taking responsibility for the integrity of the work as a whole, from inception to published article, and takes responsibility for the content of the manuscript, including the data and data analysis. All authors have made substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; have drafted the submitted article or revised it critically for important intellectual content; and have provided final approval of the version to be published. Preparation of this manuscript was completed under Contract Number: HHSM‐5002008‐0025I/HHSM‐500‐T0001, Modification No. 000007, Option Year 2 Measure Instrument Development and Support (MIDS). Sponsors did not contribute to the development of the research or manuscript. Dr. Au reports being an unpaid research consultant for Bosch Inc. He receives research funding from the NIH, Department of Veterans Affairs, AHRQ, and Gilead Sciences. The views expressed in this manuscript are those of the authors and do not necessarily represent those of the Department of Veterans Affairs. Drs. Drye and Bernheim report receiving contract funding from CMS to develop and maintain quality measures.

References
  1. FASTSTATS—chronic lower respiratory disease. Available at: http://www.cdc.gov/nchs/fastats/copd.htm. Accessed September 18, 2010.
  2. National Heart, Lung and Blood Institute. Morbidity and mortality chartbook. Available at: http://www.nhlbi.nih.gov/resources/docs/cht‐book.htm. Accessed April 27, 2010.
  3. Patil SP, Krishnan JA, Lechtzin N, Diette GB. In‐hospital mortality following acute exacerbations of chronic obstructive pulmonary disease. Arch Intern Med. 2003;163(10):1180–1186.
  4. Tabak YP, Sun X, Johannes RS, Gupta V, Shorr AF. Mortality and need for mechanical ventilation in acute exacerbations of chronic obstructive pulmonary disease: development and validation of a simple risk score. Arch Intern Med. 2009;169(17):1595–1602.
  5. Lindenauer PK, Pekow P, Gao S, Crawford AS, Gutierrez B, Benjamin EM. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144(12):894–903.
  6. Dransfield MT, Rowe SM, Johnson JE, Bailey WC, Gerald LB. Use of beta blockers and the risk of death in hospitalised patients with acute exacerbations of COPD. Thorax. 2008;63(4):301–305.
  7. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP facts and figures: statistics on hospital‐based care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed August 6, 2012.
  8. Fruchter O, Yigla M. Predictors of long‐term survival in elderly patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Respirology. 2008;13(6):851–855.
  9. Faustini A, Marino C, D'Ippoliti D, Forastiere F, Belleudi V, Perucci CA. The impact on risk‐factor analysis of different mortality outcomes in COPD patients. Eur Respir J. 2008;32(3):629–636.
  10. Roberts CM, Lowe D, Bucknall CE, Ryland I, Kelly Y, Pearson MG. Clinical audit indicators of outcome following admission to hospital with acute exacerbation of chronic obstructive pulmonary disease. Thorax. 2002;57(2):137–141.
  11. Mularski RA, Asch SM, Shrank WH, et al. The quality of obstructive lung disease care for adults in the United States as measured by adherence to recommended processes. Chest. 2006;130(6):1844–1850.
  12. Bratzler DW, Oehlert WH, McAdams LM, Leon J, Jiang H, Piatt D. Management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: physician practices in the community hospital setting. J Okla State Med Assoc. 2004;97(6):227–232.
  13. Corrigan J, Eden J, Smith B. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Washington, DC: National Academies Press; 2002.
  14. Patient Protection and Affordable Care Act [H.R. 3590], Pub. L. No. 111–148, §2702, 124 Stat. 119, 318–319 (March 23, 2010). Available at: http://www.gpo.gov/fdsys/pkg/PLAW‐111publ148/html/PLAW‐111publ148.htm. Accessed July 15, 2012.
  15. National Quality Forum. NQF Endorses Additional Pulmonary Measure. 2013. Available at: http://www.qualityforum.org/News_And_Resources/Press_Releases/2013/NQF_Endorses_Additional_Pulmonary_Measure.aspx. Accessed January 11, 2013.
  16. National Quality Forum. National voluntary consensus standards for patient outcomes: a consensus report. Washington, DC: National Quality Forum; 2011.
  17. The Measures Management System. The Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/MMS/index.html?redirect=/MMS/. Accessed August 6, 2012.
  18. Krumholz HM, Brindis RG, Brush JE, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456–462.
  19. Drye EE, Normand S‐LT, Wang Y, et al. Comparison of hospital risk‐standardized mortality rates calculated by using in‐hospital and 30‐day models: an observational study with implications for hospital profiling. Ann Intern Med. 2012;156(1 pt 1):19–26.
  20. Pope G, Ellis R, Ash A, et al. Diagnostic cost group hierarchical condition category models for Medicare risk adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc.; 2000. Available at: http://www.cms.gov/Research‐Statistics‐Data‐and‐Systems/Statistics‐Trends‐and‐Reports/Reports/downloads/pope_2000_2.pdf. Accessed November 7, 2009.
  21. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206–226.
  22. Harrell FE, Shih Y‐CT. Using full probability models to compute probabilities of actual interest to decision makers. Int J Technol Assess Health Care. 2001;17(1):17–26.
  23. Heffner JE, Mularski RA, Calverley PMA. COPD performance measures: missing opportunities for improving care. Chest. 2010;137(5):1181–1189.
  24. Krumholz HM, Normand S‐LT, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75–85.
  25. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):72–78.
  26. Agabiti N, Belleudi V, Davoli M, et al. Profiling hospital performance to monitor the quality of care: the case of COPD. Eur Respir J. 2010;35(5):1031–1038.
  27. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407–413.
  28. Welch HG, Sharp SM, Gottlieb DJ, Skinner JS, Wennberg JE. Geographic variation in diagnosis frequency and risk of death among Medicare beneficiaries. JAMA. 2011;305(11):1113–1118.
  29. Bratzler DW, Normand S‐LT, Wang Y, et al. An administrative claims model for profiling hospital 30‐day mortality rates for pneumonia patients. PLoS ONE. 2011;6(4):e17401.
  30. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693–1701.
Issue
Journal of Hospital Medicine - 8(8)
Page Number
428-435
Publications
Article Type
Display Headline
Development, validation, and results of a risk‐standardized measure of hospital 30‐day mortality for patients with exacerbation of chronic obstructive pulmonary disease
Sections
Article Source

Copyright © 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Peter K. Lindenauer, MD, MSc, Baystate Medical Center, Center for Quality of Care Research, 759 Chestnut St., Springfield, MA 01199; Telephone: 413–794‐5987; Fax: 413–794–8866; E‐mail: [email protected]

Mortality and Readmission Correlations

Article Type
Changed
Mon, 05/22/2017 - 18:28
Display Headline
Correlations among risk‐standardized mortality rates and among risk‐standardized readmission rates within hospitals

The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital‐specific, 30‐day risk‐standardized mortality and readmission rates for Medicare fee‐for‐service patients admitted with acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2, 3

Quality‐of‐care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition‐specific process measures and either mortality or readmission rates for those conditions.4–6 Mortality and readmission rates may instead reflect broader hospital‐wide or specialty‐wide structure, culture, and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9, 10 information technology,11 patient volume,12–14 academic status,12 and other institution‐wide factors.12 There is now a strong policy push towards developing hospital‐wide (all‐condition) measures, beginning with readmission.15

It is not clear how much of the quality of care for a given condition is attributable to hospital‐wide influences that affect all conditions rather than disease‐specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital‐wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease‐specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease‐specific care, such as early identification of the relevant patient population or standardizing disease‐specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.

One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent among different conditions cared for in the same hospital. If mortality (or readmission) rates across different conditions are highly correlated, it would suggest that hospital‐wide factors may play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16, 17 while other studies have found little correlation among mortality rates for various medical conditions.18, 19 It is also possible that correlation varies according to hospital characteristics; for example, smaller or nonteaching hospitals might be more homogenous in their care than larger, less homogeneous institutions. No studies have been performed using publicly reported estimates of risk‐standardized mortality or readmission rates. In this study we use the publicly reported measures of 30‐day mortality and 30‐day readmission for AMI, HF, and pneumonia to examine whether, and to what degree, mortality rates track together within US hospitals, and separately, to what degree readmission rates track together within US hospitals.

METHODS

Data Sources

CMS calculates risk‐standardized mortality and readmission rates, and patient volume, for all acute care nonfederal hospitals with one or more eligible cases of AMI, HF, or pneumonia annually based on fee‐for‐service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for the conditions over the 3‐year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20, 21 In addition, we used the 2008 American Hospital Association (AHA) Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not‐for‐profit, for‐profit), teaching status (member of Council of Teaching Hospitals, other teaching hospital, nonteaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core‐based statistical area (division [subarea of area with urban center >2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of between 10,000 and 50,000 people], and rural [<10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than one standard deviation above their respective state's mean private hospital Medicaid caseload, using 2007 AHA Annual Survey data.

Study Sample

This study includes 2 hospital cohorts, 1 for mortality and 1 for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk‐standardized mortality rates for all 3 conditions (AMI, HF, and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk‐standardized readmission rates for all 3 of these conditions.

Risk‐Standardized Measures

The measures include all FFS Medicare patients who are 65 years of age or older, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with 1 of the 3 qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributable to the initial admitting hospital, even if the patient is then transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributable to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For mortality measures, only 1 hospitalization for a patient in a specific year is randomly selected if the patient has multiple hospitalizations in the year. For readmission measures, admissions in which the patient died prior to discharge, and admissions within 30 days of an index admission, are not counted as index admissions.

Outcomes for all measures are all‐cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non‐acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23
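As a rough illustration of the index‐admission rules just described (an in‐hospital death is not an index admission, and an admission within 30 days of a prior index discharge counts as a readmission rather than a new index admission), the following sketch applies them to a single patient's admission history. The field names and data structure are hypothetical, not the CMS specification, which contains many additional rules.

```python
# Sketch of index-admission selection for the readmission measures.
# 'admissions' is one patient's stays sorted by admission date; each stay is
# a dict with hypothetical fields 'admit' (date), 'discharge' (date), and
# 'died' (bool, death before discharge).
from datetime import date

def select_index_admissions(admissions):
    """Return the stays that qualify as index admissions."""
    index_stays = []
    last_index_discharge = None
    for stay in admissions:
        if stay["died"]:
            continue  # deaths before discharge are not index admissions
        if (last_index_discharge is not None
                and (stay["admit"] - last_index_discharge).days <= 30):
            continue  # counts as a readmission, not a new index admission
        index_stays.append(stay)
        last_index_discharge = stay["discharge"]
    return index_stays
```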

The derivation and validation of the risk‐standardized outcome measures have been previously reported.20, 21, 23–27 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates, and a hospital‐specific random effect. The rates are calculated as the ratio of the number of predicted outcomes (obtained from a model applying the hospital‐specific effect) to the number of expected outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30‐day rate.
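The final scaling step of this calculation can be sketched as follows. The function mirrors the ratio described above, but the counts and overall rate in the example are illustrative, not actual measure inputs.

```python
# Sketch of the risk-standardized rate calculation described above.
# 'predicted' and 'expected' would come from the hierarchical logistic
# regression models; the numbers below are purely illustrative.

def risk_standardized_rate(predicted, expected, overall_rate):
    """Ratio of predicted to expected outcomes, scaled by the unadjusted
    overall 30-day rate (expressed as a percentage)."""
    return (predicted / expected) * overall_rate

# Hypothetical hospital: 52 predicted vs 48 expected deaths, against an
# overall unadjusted 30-day mortality rate of 11.5%.
rsmr = risk_standardized_rate(predicted=52, expected=48, overall_rate=11.5)
```

A hospital with more predicted than expected outcomes thus lands above the overall rate, and one with fewer lands below it.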

Statistical Analysis

We examined patterns and distributions of hospital volume, risk‐standardized mortality rates, and risk‐standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk‐standardized mortality rates for AMI, HF, and pneumonia, we calculated Pearson correlation coefficients, resulting in 3 correlations for the 3 pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia), and tested whether they were significantly different from 0. We also conducted a factor analysis using the principal component method with a minimum eigenvalue of 1 to retain factors to determine whether there was a single common factor underlying mortality performance for the 3 conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of risk‐standardized rate, and compared quartile of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance in both conditions, the percent of hospitals in either the top quartile of performance or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. Then, we examined Pearson correlation coefficients in different subgroups of hospitals, including by size, ownership, teaching status, cardiac procedure capability, statistical area, and safety net status. In order to determine whether these correlations differed by hospital characteristics, we tested if the Pearson correlation coefficients were different between any 2 subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk‐standardized readmission rates.
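Two of the computations described above, a Pearson correlation between two conditions' rates and Fisher's method for testing whether correlations differ between two independent hospital subgroups, can be sketched as follows. The rates are simulated, and the subgroup sizes and correlations in the final call are illustrative, not the study's results.

```python
# Sketch of a Pearson correlation and Fisher's r-to-z comparison of
# correlations from two independent subgroups. Data are simulated.
import math
import numpy as np

rng = np.random.default_rng(0)
hf = rng.normal(10.9, 1.6, 500)            # simulated HF risk-standardized rates
pn = 0.4 * hf + rng.normal(7.2, 1.7, 500)  # pneumonia rates, correlated with HF

r = np.corrcoef(hf, pn)[0, 1]              # Pearson correlation coefficient

def fisher_compare(r1, n1, r2, n2):
    """Two-sided p-value for equality of two independent correlations,
    via Fisher's r-to-z transformation."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * normal sf(|z|)

# Illustrative comparison: r = 0.41 in 3,000 hospitals vs r = 0.30 in 1,500.
p_diff = fisher_compare(0.41, 3000, 0.30, 1500)
```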

To determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses, we included only hospitals reporting both mortality and readmission rates for the condition pairs. We used the same methods to determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair among subgroups of hospital characteristics.
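The Raghunathan et al. test accounts for the dependence created by computing both correlations on overlapping sets of hospitals. As a generic, assumption‐light stand‐in (not the method used in this study), a paired bootstrap that resamples hospitals jointly preserves that dependence:

```python
# Paired bootstrap for the difference between two correlations computed on
# the same hospitals, e.g., r(mortality pair) - r(readmission pair). This is
# a generic alternative, not the Raghunathan et al. procedure itself.
import numpy as np

def bootstrap_corr_diff(x1, y1, x2, y2, n_boot=2000, seed=0):
    """Bootstrap distribution of r(x1, y1) - r(x2, y2), resampling hospitals
    jointly so the pairing across measures is preserved."""
    rng = np.random.default_rng(seed)
    n = len(x1)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample hospitals with replacement
        r1 = np.corrcoef(x1[idx], y1[idx])[0, 1]
        r2 = np.corrcoef(x2[idx], y2[idx])[0, 1]
        diffs[b] = r1 - r2
    return diffs  # e.g., a 95% CI is np.percentile(diffs, [2.5, 97.5])
```

A confidence interval for the difference that excludes zero would suggest the two correlations genuinely differ.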

All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a P‐value < 0.05 to be statistically significant, and all statistical tests were 2‐tailed.

RESULTS

The mortality cohort included 4559 hospitals, and the readmission cohort included 4468 hospitals. Most hospitals were small, nonteaching, and lacked advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).

Hospital Characteristics for Each Cohort
Description                    Mortality Measures    Readmission Measures
                               Hospital N = 4559     Hospital N = 4468
                               N (%)*                N (%)*

No. of beds
  >600                         157 (3.4)             156 (3.5)
  300–600                      628 (13.8)            626 (14.0)
  <300                         3588 (78.7)           3505 (78.5)
  Unknown                      186 (4.08)            181 (4.1)
  Mean (SD)                    173.24 (189.52)       175.23 (190.00)
Ownership
  Not‐for‐profit               2650 (58.1)           2619 (58.6)
  For‐profit                   672 (14.7)            663 (14.8)
  Government                   1051 (23.1)           1005 (22.5)
  Unknown                      186 (4.1)             181 (4.1)
Teaching status
  COTH                         277 (6.1)             276 (6.2)
  Teaching                     505 (11.1)            503 (11.3)
  Nonteaching                  3591 (78.8)           3508 (78.5)
  Unknown                      186 (4.1)             181 (4.1)
Cardiac facility type
  CABG                         1471 (32.3)           1467 (32.8)
  Cath lab                     578 (12.7)            578 (12.9)
  Neither                      2324 (51.0)           2242 (50.2)
  Unknown                      186 (4.1)             181 (4.1)
Core‐based statistical area
  Division                     621 (13.6)            618 (13.8)
  Metro                        1850 (40.6)           1835 (41.1)
  Micro                        801 (17.6)            788 (17.6)
  Rural                        1101 (24.2)           1046 (23.4)
  Unknown                      186 (4.1)             181 (4.1)
Safety net status
  No                           2995 (65.7)           2967 (66.4)
  Yes                          1377 (30.2)           1319 (29.5)
  Unknown                      187 (4.1)             182 (4.1)

Abbreviations: CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; SD, standard deviation. *Unless otherwise specified.

For mortality measures, the smallest median number of cases per hospital was for AMI (48; interquartile range [IQR], 13–171), and the greatest was for pneumonia (178; IQR, 87–336). The same pattern held for readmission measures (AMI: median 33; IQR, 9–150; pneumonia: median 191; IQR, 95–352.5). With respect to mortality measures, AMI had the highest rate and HF the lowest; for readmission measures, HF had the highest rate and pneumonia the lowest (Table 2).

Hospital Volume and Risk‐Standardized Rates for Each Condition in the Mortality and Readmission Cohorts
                                Mortality Measures (N = 4559)                          Readmission Measures (N = 4468)
                                AMI               HF                PN                 AMI               HF                PN

Total discharges                558,653           1,094,960         1,114,706          546,514           1,314,394         1,152,708
Hospital volume
  Mean (SD)                     122.54 (172.52)   240.18 (271.35)   244.51 (220.74)    122.32 (201.78)   294.18 (333.2)    257.99 (228.5)
  Median (IQR)                  48 (13, 171)      142 (56, 337)     178 (87, 336)      33 (9, 150)       172.5 (68, 407)   191 (95, 352.5)
  Range (min, max)              1, 1379           1, 2814           1, 2241            1, 1611           1, 3410           2, 2359
30‐Day risk‐standardized rate*
  Mean (SD)                     15.7 (1.8)        10.9 (1.6)        11.5 (1.9)         19.9 (1.5)        24.8 (2.1)        18.5 (1.7)
  Median (IQR)                  15.7 (14.5, 16.8) 10.8 (9.9, 11.9)  11.3 (10.2, 12.6)  19.9 (18.9, 20.8) 24.7 (23.4, 26.1) 18.4 (17.3, 19.5)
  Range (min, max)              10.3, 24.6        6.6, 18.2         6.7, 20.9          15.2, 26.3        17.3, 32.4        13.6, 26.7

Abbreviations: AMI, acute myocardial infarction; HF, heart failure; IQR, interquartile range; PN, pneumonia; SD, standard deviation. *Weighted by hospital volume.

Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27–0.41; P < 0.0001 for all 3 correlations). For example, the correlation between risk‐standardized mortality rates (RSMR) for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32–0.47; P < 0.0001 for all 3 correlations). Overall, the lowest correlation was between risk‐standardized mortality rates for AMI and pneumonia (r = 0.27), and the highest correlation was between risk‐standardized readmission rates (RSRR) for HF and pneumonia (r = 0.47) (Table 3).
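The correlation step itself is straightforward to reproduce. The sketch below computes a Pearson coefficient and a Fisher-transform significance test on synthetic hospital-level rates; the variable names and generating parameters are illustrative assumptions, not the study data, so the resulting r will not match the published values.

```python
import math
import random

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def p_value_r(r, n):
    """Two-sided test of r = 0 via the Fisher z-transform (normal approximation)."""
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Synthetic risk-standardized rates for 4559 hospitals sharing a common
# hospital-level "quality" component (parameters are illustrative only).
random.seed(42)
quality = [random.gauss(0, 1) for _ in range(4559)]
rsmr_hf = [10.9 + 0.8 * q + random.gauss(0, 1.4) for q in quality]
rsmr_pn = [11.5 + 0.8 * q + random.gauss(0, 1.7) for q in quality]

r = pearson_r(rsmr_hf, rsmr_pn)
p = p_value_r(r, len(quality))
print(round(r, 2), p < 0.0001)
```

With a sample of this size, even a modest shared quality component yields a correlation that is highly significantly different from zero, which is why all 6 published coefficients reach P < 0.0001.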

Correlations Between Risk‐Standardized Mortality Rates and Between Risk‐Standardized Readmission Rates for Subgroups of Hospitals
Description                   Mortality Measures                          Readmission Measures
                              N     AMI & HF  AMI & PN  HF & PN           N     AMI & HF  AMI & PN  HF & PN

All                           4559  0.30      0.27      0.41              4468  0.38      0.32      0.47
Hospitals with ≥25 patients   2872  0.33      0.30      0.44              2467  0.44      0.38      0.51
No. of beds (P)                     0.15      0.005     0.0009                  <0.0001   <0.0001   <0.0001
  >600                        157   0.38      0.43      0.51              156   0.67      0.50      0.66
  300–600                     628   0.29      0.30      0.49              626   0.54      0.45      0.58
  <300                        3588  0.27      0.23      0.37              3505  0.30      0.26      0.44
Ownership (P)                       0.021     0.05      0.39                    0.0004    0.0004    0.003
  Not-for-profit              2650  0.32      0.28      0.42              2619  0.43      0.36      0.50
  For-profit                  672   0.30      0.23      0.40              663   0.29      0.22      0.40
  Government                  1051  0.24      0.22      0.39              1005  0.32      0.29      0.45
Teaching status (P)                 0.11      0.08      0.0012                  <0.0001   0.0002    0.0003
  COTH                        277   0.31      0.34      0.54              276   0.54      0.47      0.59
  Teaching                    505   0.22      0.28      0.43              503   0.52      0.42      0.56
  Nonteaching                 3591  0.29      0.24      0.39              3508  0.32      0.26      0.44
Cardiac facility type (P)           0.022     0.006     <0.0001                 <0.0001   0.0006    0.004
  CABG                        1471  0.33      0.29      0.47              1467  0.48      0.37      0.52
  Cath lab                    578   0.25      0.26      0.36              578   0.32      0.37      0.47
  Neither                     2324  0.26      0.21      0.36              2242  0.28      0.27      0.44
Core-based statistical area (P)     0.0001    <0.0001   0.002                   <0.0001   <0.0001   <0.0001
  Division                    621   0.38      0.34      0.41              618   0.46      0.40      0.56
  Metro                       1850  0.26      0.26      0.42              1835  0.38      0.30      0.40
  Micro                       801   0.23      0.22      0.34              788   0.32      0.30      0.47
  Rural                       1101  0.21      0.13      0.32              1046  0.22      0.21      0.44
Safety net status (P)               0.001     0.027     0.68                    0.029     0.037     0.28
  No                          2995  0.33      0.28      0.41              2967  0.40      0.33      0.48
  Yes                         1377  0.23      0.21      0.40              1319  0.34      0.30      0.45

NOTE: Cell entries are Pearson correlation coefficients (r). The P value shown on each subgroup heading is the minimum P value of pairwise comparisons of correlations within that subgroup. Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; N, number of hospitals; PN, pneumonia.

Both the factor analysis for the mortality measures and the factor analysis for the readmission measures yielded only one factor with an eigenvalue >1. In each analysis, this single common factor accounted for more than half of the total variance (55% for the mortality measures and 60% for the readmission measures). For the mortality measures, the factor loadings of the RSMRs for myocardial infarction (MI), heart failure (HF), and pneumonia (PN) were all high (0.68 for MI, 0.78 for HF, and 0.76 for PN); the same was true of the RSRR loadings for the readmission measures (0.72 for MI, 0.81 for HF, and 0.78 for PN).
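Because the factor analysis used the principal component method with Kaiser's eigenvalue >1 rule, its headline numbers can be checked directly from the published pairwise correlations; the sketch below does this for the mortality measures (the hospital-level data are not needed for this step).

```python
import numpy as np

# Pairwise mortality correlations reported above:
# AMI-HF = 0.30, AMI-PN = 0.27, HF-PN = 0.41.
corr = np.array([
    [1.00, 0.30, 0.27],  # AMI
    [0.30, 1.00, 0.41],  # HF
    [0.27, 0.41, 1.00],  # PN
])

eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]          # sort components largest first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_retained = int((eigvals > 1).sum())      # Kaiser criterion: eigenvalue > 1
share = eigvals[0] / eigvals.sum()         # variance captured by the first factor
loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])
if loadings.sum() < 0:                     # fix the arbitrary eigenvector sign
    loadings = -loadings
print(n_retained, round(float(share), 2))  # 1 0.55
```

The single retained component explains about 55% of the variance, and its loadings fall close to the reported 0.68/0.78/0.76 pattern; the analogous check on the readmission correlations (0.38, 0.32, 0.47) comes out near the reported 60% figure.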

For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile of performance if they were in the top or bottom quartile than if they were in the middle. Less than 10% of hospitals were in the top quartile for one condition in the mortality or readmission pair and in the bottom quartile for the other condition in the pair. Kappa scores for same quartile of performance between pairs of outcomes ranged from 0.16 to 0.27, and were highest for HF and pneumonia for both mortality and readmission rates.

Measures of Agreement for Quartiles of Performance in Mortality and Readmission Pairs
Condition Pair   Same Quartile (Any), %   Same Quartile (Q1 or Q4), %   Q1 in One and Q4 in the Other, %   Weighted Kappa   Spearman Correlation

Mortality
  MI and HF      34.8                     20.2                          7.9                                0.19             0.25
  MI and PN      32.7                     18.8                          8.2                                0.16             0.22
  HF and PN      35.9                     21.8                          5.0                                0.26             0.36
Readmission
  MI and HF      36.6                     21.0                          7.5                                0.22             0.28
  MI and PN      34.0                     19.6                          8.1                                0.19             0.24
  HF and PN      37.1                     22.6                          5.4                                0.27             0.37

Abbreviations: HF, heart failure; MI, myocardial infarction; PN, pneumonia.
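The weighted kappa used for the quartile-agreement comparison can be computed directly from two quartile assignments. The sketch below implements a linearly weighted kappa from scratch; the toy inputs are illustrative (the hospital-level quartile data are not reproduced here), and scikit-learn's `cohen_kappa_score` with `weights='linear'` computes the same statistic.

```python
import numpy as np

def weighted_kappa(a, b, k=4):
    """Linearly weighted kappa for two ordinal ratings coded 0..k-1."""
    obs = np.zeros((k, k))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    # Expected agreement under independence of the two marginal distributions.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Linear weights: full credit on the diagonal, partial credit nearby.
    w = 1 - np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    po, pe = (w * obs).sum(), (w * exp).sum()
    return (po - pe) / (1 - pe)

# Toy example: identical quartile assignments give kappa = 1.
same = [0, 1, 2, 3, 0, 1, 2, 3]
print(weighted_kappa(same, same))  # 1.0
```

Values in the 0.16 to 0.27 range reported in Table 4 indicate agreement only modestly above chance, consistent with the "fair" interpretation discussed below.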

In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r = 0.51, P = 0.0009), and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r = 0.67, P < 0.0001). Across both measures and all 3 condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital's Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not‐for‐profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non‐safety net hospitals (Table 3).

For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (P < 0.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate‐sized hospitals, among nonprofit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non‐safety net hospitals (Table 5).

Comparison of Correlations Between Mortality Rates and Correlations Between Readmission Rates for Condition Pairs
Description                   AMI and HF                       AMI and PN                       HF and PN
                              N     MC    RC    P              N     MC    RC    P              N     MC    RC    P

All                           4457  0.31  0.38  <0.0001        4459  0.27  0.32  0.007          4731  0.41  0.46  0.0004
Hospitals with ≥25 patients   2472  0.33  0.44  <0.001         2463  0.31  0.38  0.01           4104  0.42  0.47  0.001
No. of beds
  >600                        156   0.38  0.67  0.0002         156   0.43  0.50  0.48           160   0.51  0.66  0.042
  300–600                     626   0.29  0.54  <0.0001        626   0.31  0.45  0.003          630   0.49  0.58  0.033
  <300                        3494  0.28  0.30  0.21           3496  0.23  0.26  0.17           3733  0.37  0.43  0.003
Ownership
  Not-for-profit              2614  0.32  0.43  <0.0001        2617  0.28  0.36  0.003          2697  0.42  0.50  0.0003
  For-profit                  662   0.30  0.29  0.90           661   0.23  0.22  0.75           699   0.40  0.40  0.99
  Government                  1000  0.25  0.32  0.09           1000  0.22  0.29  0.09           1127  0.39  0.43  0.21
Teaching status
  COTH                        276   0.31  0.54  0.001          277   0.35  0.46  0.10           278   0.54  0.59  0.41
  Teaching                    504   0.22  0.52  <0.0001        504   0.28  0.42  0.012          508   0.43  0.56  0.005
  Nonteaching                 3496  0.29  0.32  0.18           3497  0.24  0.26  0.46           3737  0.39  0.43  0.016
Cardiac facility type
  CABG                        1465  0.33  0.48  <0.0001        1467  0.30  0.37  0.018          1483  0.47  0.51  0.103
  Cath lab                    577   0.25  0.32  0.18           577   0.26  0.37  0.046          579   0.36  0.47  0.022
  Neither                     2234  0.26  0.28  0.48           2234  0.21  0.27  0.037          2461  0.36  0.44  0.002
Core-based statistical area
  Division                    618   0.38  0.46  0.09           620   0.34  0.40  0.18           630   0.41  0.56  0.001
  Metro                       1833  0.26  0.38  <0.0001        1832  0.26  0.30  0.21           1896  0.42  0.40  0.63
  Micro                       787   0.24  0.32  0.08           787   0.22  0.30  0.11           820   0.34  0.46  0.003
  Rural                       1038  0.21  0.22  0.83           1039  0.13  0.21  0.056          1177  0.32  0.43  0.002
Safety net status
  No                          2961  0.33  0.40  0.001          2963  0.28  0.33  0.036          3062  0.41  0.48  0.001
  Yes                         1314  0.23  0.34  0.003          1314  0.22  0.30  0.015          1460  0.40  0.45  0.14

Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; MC, mortality correlation; N, number of hospitals; PN, pneumonia; RC, readmission correlation.
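The paper tests these differences with the method of Raghunathan et al. for correlated but nonoverlapping correlations, which requires the full between-measure correlation structure. As a simplified illustration only, the sketch below applies the ordinary Fisher z-test for two independent correlations to the published HF and pneumonia values; because it ignores the within-hospital dependence, it is more conservative than the published test (P ≈ 0.003 here vs the reported 0.0004).

```python
import math

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_independent_correlations(r1, n1, r2, n2):
    """z statistic and two-sided P for the difference of two correlations,
    treating the two samples as independent (a simplification here)."""
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (fisher_z(r1) - fisher_z(r2)) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# HF-PN pair from Table 5: mortality correlation 0.41 vs readmission
# correlation 0.46, n = 4731 hospitals in both estimates.
z, p = compare_independent_correlations(0.41, 4731, 0.46, 4731)
print(round(z, 1), round(p, 3))  # -3.0 0.003
```

Even this conservative version finds the readmission correlation significantly larger, in line with the paper's conclusion for this pair.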

DISCUSSION

In this study, we found that risk‐standardized mortality rates for 3 common medical conditions were moderately correlated within institutions, as were risk‐standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked closest together in large, urban, and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for a different condition.

Our findings are consistent with the hypothesis that 30‐day risk‐standardized mortality and 30‐day risk‐standardized readmission rates capture, in part, broad aspects of hospital quality that transcend condition‐specific activities. In this study, readmission rates tracked together more closely than mortality rates for every pair of conditions, suggesting that hospital‐wide environment, structure, and processes may contribute more to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation, and discharge communication with patients and outpatient clinicians, are typically hospital‐wide processes.

Our study differs from earlier studies of medical conditions in that the correlations we found were higher.18, 19 There are several possible explanations for this difference. First, during the intervening 15–25 years since those studies were performed, care for these conditions has evolved substantially, such that there are now more standardized protocols available for all 3 of these diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing correlation of performance among conditions. In addition, there are now more technologies and systems available that span care for multiple conditions, such as electronic medical records and quality committees, than were available in previous generations. Second, one of these studies utilized less robust risk‐adjustment,18 and neither used the same methodology of risk standardization. Nonetheless, it is interesting to note that Rosenthal and colleagues identified the same increase in correlation with higher volumes as we did.19 Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found in these medical conditions.16, 17

Accountable care organizations will be assessed using an all‐condition readmission measure,31 several states track all‐condition readmission rates,32–34 and several countries measure all‐condition mortality.35 An all‐condition measure for quality assessment first requires that there be a hospital‐wide quality signal above and beyond disease‐specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across 3 common conditions. There are other considerations, however, in developing all‐condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described to be substantial for mortality measures.35

We were surprised by the finding that risk‐standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals might be more homogenous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all 3 conditions, because estimates for these hospitals are more precise. Consequently, we have greater confidence in results for larger volumes, and suspect a similar quality signal may be present but more difficult to detect statistically in smaller hospitals. Overall correlations were higher when we restricted the sample to hospitals with at least 25 cases, as is used for public reporting. It is also possible that the finding is real given that large‐volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.14, 36

The kappa scores comparing quartile of national performance for pairs of conditions were only in the fair range. There are several possible explanations for this fact: 1) outcomes for these 3 conditions are not measuring the same constructs; 2) they are all measuring the same construct, but they are unreliable in doing so; and/or 3) hospitals have similar latent quality for all 3 conditions, but the national quality of performance differs by condition, yielding variable relative performance per hospital for each condition. Based solely on our findings, we cannot distinguish which, if any, of these explanations may be true.31

Our study has several limitations. First, all 3 conditions currently publicly reported by CMS are medical diagnoses, although AMI patients may be cared for in distinct cardiology units and often undergo procedures; therefore, we cannot determine the degree to which correlations reflect hospital‐wide quality versus medicine‐wide quality. An institution may have a weak medicine department but a strong surgical department or vice versa. Second, it is possible that the correlations among conditions for readmission and among conditions for mortality are attributable to patient characteristics that are not adequately adjusted for in the risk‐adjustment model, such as socioeconomic factors, or to hospital characteristics not related to quality, such as coding practices or inter‐hospital transfer rates. For this to be true, these unmeasured characteristics would have to be consistent across different conditions within each hospital and have a consistent influence on outcomes. Third, it is possible that public reporting may have prompted disease‐specific focus on these conditions. We do not have data from non‐publicly reported conditions to test this hypothesis. Fourth, there are many small‐volume hospitals in this study; their estimates for readmission and mortality are less reliable than for large‐volume hospitals, potentially limiting our ability to detect correlations in this group of hospitals.

This study lends credence to the hypothesis that 30‐day risk‐standardized mortality and readmission rates for individual conditions may reflect aspects of hospital‐wide quality or at least medicine‐wide quality, although the correlations are not large enough to conclude that hospital‐wide factors play a dominant role, and there are other possible explanations for the correlations. Further work is warranted to better understand the causes of the correlations, and to better specify the nature of hospital factors that contribute to correlations among outcomes.

Acknowledgements

Disclosures: Dr Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program. Dr Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30 AG021342 NIH/NIA). Dr Krumholz is supported by grant U01 HL105270‐01 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. Dr Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Authors Drye, Krumholz, and Wang receive support from the Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting. The analyses upon which this publication is based were performed under Contract Number HHSM‐500‐2008‐0025I, Task Order T0001, entitled Measure & Instrument Development and Support (MIDS): Development and Re‐evaluation of the CMS Hospital Outcomes and Efficiency Measures, funded by the Centers for Medicare & Medicaid Services, an agency of the US Department of Health and Human Services. The Centers for Medicare & Medicaid Services reviewed and approved the use of its data for this work, and approved submission of the manuscript. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

References
  1. US Department of Health and Human Services. Hospital Compare. 2011. Available at: http://www.hospitalcompare.hhs.gov. Accessed March 5, 2011.
  2. Balla U, Malnick S, Schattner A. Early readmissions to the department of medicine as a screening tool for monitoring quality of care problems. Medicine (Baltimore). 2008;87(5):294–300.
  3. Dubois RW, Rogers WH, Moxley JH, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317(26):1674–1680.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694–2702.
  5. Jha AK, Orav EJ, Epstein AM. Public reporting of discharge planning and rates of readmissions. N Engl J Med. 2009;361(27):2637–2645.
  6. Patterson ME, Hernandez AF, Hammill BG, et al. Process of care performance measures and long‐term outcomes in patients hospitalized with heart failure. Med Care. 2010;48(3):210–216.
  7. Chukmaitov AS, Bazzoli GJ, Harless DW, Hurley RE, Devers KJ, Zhao M. Variations in inpatient mortality among hospitals in different system types, 1995 to 2000. Med Care. 2009;47(4):466–473.
  8. Devereaux PJ, Choi PT, Lacchetti C, et al. A systematic review and meta‐analysis of studies comparing mortality rates of private for‐profit and private not‐for‐profit hospitals. Can Med Assoc J. 2002;166(11):1399–1406.
  9. Curry LA, Spatz E, Cherlin E, et al. What distinguishes top‐performing hospitals in acute myocardial infarction mortality rates? A qualitative study. Ann Intern Med. 2011;154(6):384–390.
  10. Hansen LO, Williams MV, Singer SJ. Perceptions of hospital safety climate and incidence of readmission. Health Serv Res. 2011;46(2):596–616.
  11. Longhurst CA, Parast L, Sandborg CI, et al. Decrease in hospital‐wide mortality rate after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2010;126(1):14–21.
  12. Fink A, Yano EM, Brook RH. The condition of the literature on differences in hospital mortality. Med Care. 1989;27(4):315–336.
  13. Gandjour A, Bannenberg A, Lauterbach KW. Threshold volumes associated with higher survival in health care: a systematic review. Med Care. 2003;41(10):1129–1141.
  14. Ross JS, Normand SL, Wang Y, et al. Hospital volume and 30‐day mortality for three common medical conditions. N Engl J Med. 2010;362(12):1110–1118.
  15. Patient Protection and Affordable Care Act. Pub. L. No. 111–148, 124 Stat., §3025. 2010. Available at: http://www.gpo.gov/fdsys/pkg/PLAW‐111publ148/content‐detail.html. Accessed July 26, 2012.
  16. Dimick JB, Staiger DO, Birkmeyer JD. Are mortality rates for different operations related? Implications for measuring the quality of noncardiac surgery. Med Care. 2006;44(8):774–778.
  17. Goodney PP, O'Connor GT, Wennberg DE, Birkmeyer JD. Do hospitals with low mortality rates in coronary artery bypass also perform well in valve replacement? Ann Thorac Surg. 2003;76(4):1131–1137.
  18. Chassin MR, Park RE, Lohr KN, Keesey J, Brook RH. Differences among hospitals in Medicare patient mortality. Health Serv Res. 1989;24(1):1–31.
  19. Rosenthal GE, Shah A, Way LE, Harper DL. Variations in standardized hospital mortality rates for six common medical diagnoses: implications for profiling hospital quality. Med Care. 1998;36(7):955–964.
  20. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142–150.
  21. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30‐day all‐cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29–37.
  22. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety‐net hospitals. Health Aff (Millwood). 2007;26(1):238–248.
  23. National Quality Measures Clearinghouse. 2011. Available at: http://www.qualitymeasures.ahrq.gov/. Accessed February 21, 2011.
  24. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683–1692.
  25. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693–1701.
  26. Bratzler DW, Normand SL, Wang Y, et al. An administrative claims model for profiling hospital 30‐day mortality rates for pneumonia patients. PLoS One. 2011;6(4):e17401.
  27. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30‐day all‐cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243–252.
  28. Kaiser HF. The application of electronic computers to factor analysis. Educ Psychol Meas. 1960;20:141–151.
  29. Fisher RA. On the 'probable error' of a coefficient of correlation deduced from a small sample. Metron. 1921;1:3–32.
  30. Raghunathan TE, Rosenthal R, Rubin DB. Comparing correlated but nonoverlapping correlations. Psychol Methods. 1996;1(2):178–183.
  31. Centers for Medicare and Medicaid Services. Medicare Shared Savings Program: Accountable Care Organizations, Final Rule. Fed Reg. 2011;76:67802–67990.
  32. Massachusetts Healthcare Quality and Cost Council. Potentially Preventable Readmissions. 2011. Available at: http://www.mass.gov/hqcc/the‐hcqcc‐council/data‐submission‐information/potentially‐preventable‐readmissions‐ppr.html. Accessed February 29, 2012.
  33. Texas Medicaid. Potentially Preventable Readmission (PPR). 2012. Available at: http://www.tmhp.com/Pages/Medicaid/Hospital_PPR.aspx. Accessed February 29, 2012.
  34. New York State. Potentially Preventable Readmissions. 2011. Available at: http://www.health.ny.gov/regulations/recently_adopted/docs/2011–02‐23_potentially_preventable_readmissions.pdf. Accessed February 29, 2012.
  35. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital‐wide mortality rates. N Engl J Med. 2010;363(26):2530–2539.
  36. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16):1628–1638.
Issue
Journal of Hospital Medicine - 7(9)
Page Number
690-696
Sections
Files
Files
Article PDF
Article PDF

The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital‐specific, 30‐day risk‐standardized mortality and readmission rates for Medicare fee‐for‐service patients admitted with acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2, 3

Quality‐of‐care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition‐specific process measures and either mortality or readmission rates for those conditions.46 Mortality and readmission rates may instead reflect broader hospital‐wide or specialty‐wide structure, culture, and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9, 10 information technology,11 patient volume,1214 academic status,12 and other institution‐wide factors.12 There is now a strong policy push towards developing hospital‐wide (all‐condition) measures, beginning with readmission.15

It is not clear how much of the quality of care for a given condition is attributable to hospital‐wide influences that affect all conditions rather than disease‐specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital‐wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease‐specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease‐specific care, such as early identification of the relevant patient population or standardizing disease‐specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.

One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent among different conditions cared for in the same hospital. If mortality (or readmission) rates across different conditions are highly correlated, it would suggest that hospital‐wide factors may play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16, 17 while other studies have found little correlation among mortality rates for various medical conditions.18, 19 It is also possible that correlation varies according to hospital characteristics; for example, smaller or nonteaching hospitals might be more homogenous in their care than larger, less homogeneous institutions. No studies have been performed using publicly reported estimates of risk‐standardized mortality or readmission rates. In this study we use the publicly reported measures of 30‐day mortality and 30‐day readmission for AMI, HF, and pneumonia to examine whether, and to what degree, mortality rates track together within US hospitals, and separately, to what degree readmission rates track together within US hospitals.

METHODS

Data Sources

CMS calculates risk‐standardized mortality and readmission rates, and patient volume, for all acute care nonfederal hospitals with one or more eligible case of AMI, HF, and pneumonia annually based on fee‐for‐service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for the conditions over the 3‐year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20, 21 In addition, we used the 2008 American Hospital Association (AHA) Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not‐for‐profit, for‐profit), teaching status (member of Council of Teaching Hospitals, other teaching hospital, nonteaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core‐based statistical area (division [subarea of area with urban center >2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of between 10,000 and 50,000 people], and rural [<10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than one standard deviation above their respective state's mean private hospital Medicaid caseload using the 2007 AHA Annual Survey data.

Study Sample

This study includes 2 hospital cohorts, 1 for mortality and 1 for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk‐standardized mortality rates for all 3 conditions (AMI, HF, and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk‐standardized readmission rates for all 3 of these conditions.

Risk‐Standardized Measures

The measures include all FFS Medicare patients who are 65 years old, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with 1 of the 3 qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributable to the initial admitting hospital, even if the patient is then transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributable to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For mortality measures, only 1 hospitalization for a patient in a specific year is randomly selected if the patient has multiple hospitalizations in the year. For readmission measures, admissions in which the patient died prior to discharge, and admissions within 30 days of an index admission, are not counted as index admissions.

Outcomes for all measures are all‐cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non‐acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23

The derivation and validation of the risk‐standardized outcome measures have been previously reported.20, 21, 2327 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates, and a hospital‐specific random effect. The rates are calculated as the ratio of the number of predicted outcomes (obtained from a model applying the hospital‐specific effect) to the number of expected outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30‐day rate.

Statistical Analysis

We examined patterns and distributions of hospital volume, risk‐standardized mortality rates, and risk‐standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk‐standardized mortality rates for AMI, HF, and pneumonia, we calculated Pearson correlation coefficients, resulting in 3 correlations for the 3 pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia), and tested whether they were significantly different from 0. We also conducted a factor analysis using the principal component method with a minimum eigenvalue of 1 to retain factors to determine whether there was a single common factor underlying mortality performance for the 3 conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of risk‐standardized rate, and compared quartile of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance in both conditions, the percent of hospitals in either the top quartile of performance or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. Then, we examined Pearson correlation coefficients in different subgroups of hospitals, including by size, ownership, teaching status, cardiac procedure capability, statistical area, and safety net status. In order to determine whether these correlations differed by hospital characteristics, we tested if the Pearson correlation coefficients were different between any 2 subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk‐standardized readmission rates.

To determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses, we included only hospitals reporting both mortality and readmission rates for the condition pairs. We used the same methods to determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair among subgroups of hospital characteristics.

All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a P‐value < 0.05 to be statistically significant, and all statistical tests were 2‐tailed.

RESULTS

The mortality cohort included 4559 hospitals, and the readmission cohort included 4468 hospitals. Most hospitals were small, nonteaching, and lacked advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).

Hospital Characteristics for Each Cohort
Description | Mortality Measures, Hospital N = 4559 | Readmission Measures, Hospital N = 4468
 | N (%)* | N (%)*
No. of beds |  | 
  >600 | 157 (3.4) | 156 (3.5)
  300–600 | 628 (13.8) | 626 (14.0)
  <300 | 3588 (78.7) | 3505 (78.5)
  Unknown | 186 (4.08) | 181 (4.1)
  Mean (SD) | 173.24 (189.52) | 175.23 (190.00)
Ownership |  | 
  Not‐for‐profit | 2650 (58.1) | 2619 (58.6)
  For‐profit | 672 (14.7) | 663 (14.8)
  Government | 1051 (23.1) | 1005 (22.5)
  Unknown | 186 (4.1) | 181 (4.1)
Teaching status |  | 
  COTH | 277 (6.1) | 276 (6.2)
  Teaching | 505 (11.1) | 503 (11.3)
  Nonteaching | 3591 (78.8) | 3508 (78.5)
  Unknown | 186 (4.1) | 181 (4.1)
Cardiac facility type |  | 
  CABG | 1471 (32.3) | 1467 (32.8)
  Cath lab | 578 (12.7) | 578 (12.9)
  Neither | 2324 (51.0) | 2242 (50.2)
  Unknown | 186 (4.1) | 181 (4.1)
Core‐based statistical area |  | 
  Division | 621 (13.6) | 618 (13.8)
  Metro | 1850 (40.6) | 1835 (41.1)
  Micro | 801 (17.6) | 788 (17.6)
  Rural | 1101 (24.2) | 1046 (23.4)
Safety net status |  | 
  No | 2995 (65.7) | 2967 (66.4)
  Yes | 1377 (30.2) | 1319 (29.5)
  Unknown | 187 (4.1) | 182 (4.1)

Abbreviations: CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; SD, standard deviation. *Unless otherwise specified.

For mortality measures, the smallest median number of cases per hospital was for AMI (48; interquartile range [IQR], 13–171), and the greatest was for pneumonia (178; IQR, 87–336). The same pattern held for readmission measures (AMI: median 33; IQR, 9–150; pneumonia: median 191; IQR, 95–352.5). With respect to mortality measures, AMI had the highest rate and HF the lowest rate; however, for readmission measures, HF had the highest rate and pneumonia the lowest rate (Table 2).

Hospital Volume and Risk‐Standardized Rates for Each Condition in the Mortality and Readmission Cohorts
Description | AMI | HF | PN | AMI | HF | PN
(columns 2–4: mortality measures, N = 4559; columns 5–7: readmission measures, N = 4468)
Total discharges | 558,653 | 1,094,960 | 1,114,706 | 546,514 | 1,314,394 | 1,152,708
Hospital volume |  |  |  |  |  | 
  Mean (SD) | 122.54 (172.52) | 240.18 (271.35) | 244.51 (220.74) | 122.32 (201.78) | 294.18 (333.2) | 257.99 (228.5)
  Median (IQR) | 48 (13, 171) | 142 (56, 337) | 178 (87, 336) | 33 (9, 150) | 172.5 (68, 407) | 191 (95, 352.5)
  Range (min, max) | 1, 1379 | 1, 2814 | 1, 2241 | 1, 1611 | 1, 3410 | 2, 2359
30‐Day risk‐standardized rate* |  |  |  |  |  | 
  Mean (SD) | 15.7 (1.8) | 10.9 (1.6) | 11.5 (1.9) | 19.9 (1.5) | 24.8 (2.1) | 18.5 (1.7)
  Median (IQR) | 15.7 (14.5, 16.8) | 10.8 (9.9, 11.9) | 11.3 (10.2, 12.6) | 19.9 (18.9, 20.8) | 24.7 (23.4, 26.1) | 18.4 (17.3, 19.5)
  Range (min, max) | 10.3, 24.6 | 6.6, 18.2 | 6.7, 20.9 | 15.2, 26.3 | 17.3, 32.4 | 13.6, 26.7

Abbreviations: AMI, acute myocardial infarction; HF, heart failure; IQR, interquartile range; PN, pneumonia; SD, standard deviation. *Weighted by hospital volume.

Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27–0.41; P < 0.0001 for all 3 correlations). For example, the correlation between risk‐standardized mortality rates (RSMR) for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32–0.47; P < 0.0001 for all 3 correlations). Overall, the lowest correlation was between risk‐standardized mortality rates for AMI and pneumonia (r = 0.27), and the highest correlation was between risk‐standardized readmission rates (RSRR) for HF and pneumonia (r = 0.47) (Table 3).

Correlations Between Risk‐Standardized Mortality Rates and Between Risk‐Standardized Readmission Rates for Subgroups of Hospitals
Subgroup | N | AMI and HF, r | AMI and PN, r | HF and PN, r | N | AMI and HF, r | AMI and PN, r | HF and PN, r
(columns 2–5: mortality measures; columns 6–9: readmission measures)
All | 4559 | 0.30 | 0.27 | 0.41 | 4468 | 0.38 | 0.32 | 0.47
Hospitals with ≥25 patients | 2872 | 0.33 | 0.30 | 0.44 | 2467 | 0.44 | 0.38 | 0.51
No. of beds (P) |  | 0.15 | 0.005 | 0.0009 |  | <0.0001 | <0.0001 | <0.0001
  >600 | 157 | 0.38 | 0.43 | 0.51 | 156 | 0.67 | 0.50 | 0.66
  300–600 | 628 | 0.29 | 0.30 | 0.49 | 626 | 0.54 | 0.45 | 0.58
  <300 | 3588 | 0.27 | 0.23 | 0.37 | 3505 | 0.30 | 0.26 | 0.44
Ownership (P) |  | 0.021 | 0.05 | 0.39 |  | 0.0004 | 0.0004 | 0.003
  Not‐for‐profit | 2650 | 0.32 | 0.28 | 0.42 | 2619 | 0.43 | 0.36 | 0.50
  For‐profit | 672 | 0.30 | 0.23 | 0.40 | 663 | 0.29 | 0.22 | 0.40
  Government | 1051 | 0.24 | 0.22 | 0.39 | 1005 | 0.32 | 0.29 | 0.45
Teaching status (P) |  | 0.11 | 0.08 | 0.0012 |  | <0.0001 | 0.0002 | 0.0003
  COTH | 277 | 0.31 | 0.34 | 0.54 | 276 | 0.54 | 0.47 | 0.59
  Teaching | 505 | 0.22 | 0.28 | 0.43 | 503 | 0.52 | 0.42 | 0.56
  Nonteaching | 3591 | 0.29 | 0.24 | 0.39 | 3508 | 0.32 | 0.26 | 0.44
Cardiac facility type (P) |  | 0.022 | 0.006 | <0.0001 |  | <0.0001 | 0.0006 | 0.004
  CABG | 1471 | 0.33 | 0.29 | 0.47 | 1467 | 0.48 | 0.37 | 0.52
  Cath lab | 578 | 0.25 | 0.26 | 0.36 | 578 | 0.32 | 0.37 | 0.47
  Neither | 2324 | 0.26 | 0.21 | 0.36 | 2242 | 0.28 | 0.27 | 0.44
Core‐based statistical area (P) |  | 0.0001 | <0.0001 | 0.002 |  | <0.0001 | <0.0001 | <0.0001
  Division | 621 | 0.38 | 0.34 | 0.41 | 618 | 0.46 | 0.40 | 0.56
  Metro | 1850 | 0.26 | 0.26 | 0.42 | 1835 | 0.38 | 0.30 | 0.40
  Micro | 801 | 0.23 | 0.22 | 0.34 | 788 | 0.32 | 0.30 | 0.47
  Rural | 1101 | 0.21 | 0.13 | 0.32 | 1046 | 0.22 | 0.21 | 0.44
Safety net status (P) |  | 0.001 | 0.027 | 0.68 |  | 0.029 | 0.037 | 0.28
  No | 2995 | 0.33 | 0.28 | 0.41 | 2967 | 0.40 | 0.33 | 0.48
  Yes | 1377 | 0.23 | 0.21 | 0.40 | 1319 | 0.34 | 0.30 | 0.45

NOTE: P value (shown on each subgroup category row) is the minimum P value of pairwise comparisons within that subgroup. Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; N, number of hospitals; PN, pneumonia; r, Pearson correlation coefficient.

Both the factor analysis for the mortality measures and the factor analysis for the readmission measures yielded only one factor with an eigenvalue >1. In each analysis, this single common factor accounted for more than half of the total variance based on the cumulative eigenvalue (55% for the mortality measures and 60% for the readmission measures). For the mortality measures, the factor loadings of the RSMRs for myocardial infarction (MI), heart failure (HF), and pneumonia (PN) were all high (0.68 for MI, 0.78 for HF, and 0.76 for PN); the same was true of the RSRR loadings for the readmission measures (0.72 for MI, 0.81 for HF, and 0.78 for PN).
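The single‐factor result can be illustrated from the pairwise correlations alone: the first principal component of a correlation matrix carries a fraction of total variance equal to its largest eigenvalue divided by the trace. A sketch using the overall readmission correlations reported in Table 3 (0.38, 0.32, 0.47), with a pure‐Python power iteration standing in for the paper's SAS factor analysis:

```python
def mat_vec(m, v):
    # Multiply a square matrix (list of rows) by a vector.
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def largest_eigenvalue(m, iters=200):
    """Power iteration: repeatedly apply the matrix and renormalize.
    For an entrywise-positive matrix this converges to the dominant
    (Perron) eigenvalue."""
    v = [1.0] * len(m)
    lam = 0.0
    for _ in range(iters):
        w = mat_vec(m, v)
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Correlation matrix for the 3 readmission measures (AMI, HF, PN),
# built from the overall pairwise correlations in Table 3.
corr = [
    [1.00, 0.38, 0.32],
    [0.38, 1.00, 0.47],
    [0.32, 0.47, 1.00],
]
lam = largest_eigenvalue(corr)
# Trace of a 3x3 correlation matrix is 3, so lam/3 is the share of
# total variance on the first component.
print(round(lam / 3, 2))  # 0.59
```

The ≈59% obtained from these rounded pairwise correlations is close to the 60% the factor analysis reported for the readmission measures.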

For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile of performance if they were in the top or bottom quartile than if they were in the middle. Less than 10% of hospitals were in the top quartile for one condition in the mortality or readmission pair and in the bottom quartile for the other condition in the pair. Kappa scores for same quartile of performance between pairs of outcomes ranged from 0.16 to 0.27, and were highest for HF and pneumonia for both mortality and readmission rates.

Measures of Agreement for Quartiles of Performance in Mortality and Readmission Pairs
Condition Pair | Same Quartile (Any), % | Same Quartile (Q1 or Q4), % | Q1 in One and Q4 in the Other, % | Weighted Kappa | Spearman Correlation
Mortality |  |  |  |  | 
  MI and HF | 34.8 | 20.2 | 7.9 | 0.19 | 0.25
  MI and PN | 32.7 | 18.8 | 8.2 | 0.16 | 0.22
  HF and PN | 35.9 | 21.8 | 5.0 | 0.26 | 0.36
Readmission |  |  |  |  | 
  MI and HF | 36.6 | 21.0 | 7.5 | 0.22 | 0.28
  MI and PN | 34.0 | 19.6 | 8.1 | 0.19 | 0.24
  HF and PN | 37.1 | 22.6 | 5.4 | 0.27 | 0.37

Abbreviations: HF, heart failure; MI, myocardial infarction; PN, pneumonia.
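The weighted kappa used to quantify quartile agreement can be sketched as follows. The paper does not state its weighting scheme, so linear weights (one common choice) and a hypothetical set of quartile assignments are assumed here:

```python
from collections import Counter

def weighted_kappa(x, y, k=4):
    """Linearly weighted kappa for two paired ordinal ratings taking
    values 1..k (here, quartiles of performance on two conditions).
    Disagreements are discounted in proportion to their distance."""
    n = len(x)
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    # Observed weighted agreement.
    po = sum(w[a - 1][b - 1] * c for (a, b), c in obs.items()) / n
    # Agreement expected by chance from the marginal distributions.
    pe = sum(w[i][j] * (px[i + 1] / n) * (py[j + 1] / n)
             for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)

# Hypothetical quartile assignments for 8 hospitals on two conditions.
a = [1, 1, 2, 2, 3, 3, 4, 4]
b = [1, 2, 2, 3, 3, 4, 4, 4]
print(round(weighted_kappa(a, b), 2))  # 0.7
```

Because adjacent-quartile disagreements receive partial credit, the weighted kappa is less punishing than simple kappa for near-misses, which suits ordinal performance categories.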

In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r = 0.51, P = 0.0009), and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r = 0.67, P < 0.0001). Across both measures and all 3 condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital's Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not‐for‐profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non‐safety net hospitals (Table 3).

For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (P < 0.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate‐sized hospitals, among nonprofit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non‐safety net hospitals (Table 5).

Comparison of Correlations Between Mortality Rates and Correlations Between Readmission Rates for Condition Pairs
Subgroup | N | MC | RC | P | N | MC | RC | P | N | MC | RC | P
(columns 2–5: AMI and HF; columns 6–9: AMI and PN; columns 10–13: HF and PN)
All | 4457 | 0.31 | 0.38 | <0.0001 | 4459 | 0.27 | 0.32 | 0.007 | 4731 | 0.41 | 0.46 | 0.0004
Hospitals with ≥25 patients | 2472 | 0.33 | 0.44 | <0.001 | 2463 | 0.31 | 0.38 | 0.01 | 4104 | 0.42 | 0.47 | 0.001
No. of beds |  |  |  |  |  |  |  |  |  |  |  | 
  >600 | 156 | 0.38 | 0.67 | 0.0002 | 156 | 0.43 | 0.50 | 0.48 | 160 | 0.51 | 0.66 | 0.042
  300–600 | 626 | 0.29 | 0.54 | <0.0001 | 626 | 0.31 | 0.45 | 0.003 | 630 | 0.49 | 0.58 | 0.033
  <300 | 3494 | 0.28 | 0.30 | 0.21 | 3496 | 0.23 | 0.26 | 0.17 | 3733 | 0.37 | 0.43 | 0.003
Ownership |  |  |  |  |  |  |  |  |  |  |  | 
  Not‐for‐profit | 2614 | 0.32 | 0.43 | <0.0001 | 2617 | 0.28 | 0.36 | 0.003 | 2697 | 0.42 | 0.50 | 0.0003
  For‐profit | 662 | 0.30 | 0.29 | 0.90 | 661 | 0.23 | 0.22 | 0.75 | 699 | 0.40 | 0.40 | 0.99
  Government | 1000 | 0.25 | 0.32 | 0.09 | 1000 | 0.22 | 0.29 | 0.09 | 1127 | 0.39 | 0.43 | 0.21
Teaching status |  |  |  |  |  |  |  |  |  |  |  | 
  COTH | 276 | 0.31 | 0.54 | 0.001 | 277 | 0.35 | 0.46 | 0.10 | 278 | 0.54 | 0.59 | 0.41
  Teaching | 504 | 0.22 | 0.52 | <0.0001 | 504 | 0.28 | 0.42 | 0.012 | 508 | 0.43 | 0.56 | 0.005
  Nonteaching | 3496 | 0.29 | 0.32 | 0.18 | 3497 | 0.24 | 0.26 | 0.46 | 3737 | 0.39 | 0.43 | 0.016
Cardiac facility type |  |  |  |  |  |  |  |  |  |  |  | 
  CABG | 1465 | 0.33 | 0.48 | <0.0001 | 1467 | 0.30 | 0.37 | 0.018 | 1483 | 0.47 | 0.51 | 0.103
  Cath lab | 577 | 0.25 | 0.32 | 0.18 | 577 | 0.26 | 0.37 | 0.046 | 579 | 0.36 | 0.47 | 0.022
  Neither | 2234 | 0.26 | 0.28 | 0.48 | 2234 | 0.21 | 0.27 | 0.037 | 2461 | 0.36 | 0.44 | 0.002
Core‐based statistical area |  |  |  |  |  |  |  |  |  |  |  | 
  Division | 618 | 0.38 | 0.46 | 0.09 | 620 | 0.34 | 0.40 | 0.18 | 630 | 0.41 | 0.56 | 0.001
  Metro | 1833 | 0.26 | 0.38 | <0.0001 | 1832 | 0.26 | 0.30 | 0.21 | 1896 | 0.42 | 0.40 | 0.63
  Micro | 787 | 0.24 | 0.32 | 0.08 | 787 | 0.22 | 0.30 | 0.11 | 820 | 0.34 | 0.46 | 0.003
  Rural | 1038 | 0.21 | 0.22 | 0.83 | 1039 | 0.13 | 0.21 | 0.056 | 1177 | 0.32 | 0.43 | 0.002
Safety net status |  |  |  |  |  |  |  |  |  |  |  | 
  No | 2961 | 0.33 | 0.40 | 0.001 | 2963 | 0.28 | 0.33 | 0.036 | 3062 | 0.41 | 0.48 | 0.001
  Yes | 1314 | 0.23 | 0.34 | 0.003 | 1314 | 0.22 | 0.30 | 0.015 | 1460 | 0.40 | 0.45 | 0.14

Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; MC, mortality correlation; N, number of hospitals; PN, pneumonia; RC, readmission correlation.

DISCUSSION

In this study, we found that risk‐standardized mortality rates for 3 common medical conditions were moderately correlated within institutions, as were risk‐standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked closest together in large, urban, and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for a different condition.

Our findings are consistent with the hypothesis that 30‐day risk‐standardized mortality and 30‐day risk‐standardized readmission rates, in part, capture broad aspects of hospital quality that transcend condition‐specific activities. In this study, readmission rates tracked better together than mortality rates for every pair of conditions, suggesting that there may be a greater contribution of hospital‐wide environment, structure, and processes to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation, and discharge communication with patients and outpatient clinicians, are typically hospital‐wide processes.

Our study differs from earlier studies of medical conditions in that we found higher correlations.18, 19 There are several possible explanations for this difference. First, in the 15–25 years since those studies were performed, care for these conditions has evolved substantially, and standardized protocols are now available for all 3 diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing the correlation of performance among conditions. In addition, more technologies and systems that span care for multiple conditions, such as electronic medical records and quality committees, are available now than in previous generations. Second, one of these studies used less robust risk adjustment,18 and neither used the same risk‐standardization methodology. Nonetheless, it is interesting to note that Rosenthal and colleagues identified the same increase in correlation with higher volumes that we did.19 Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found for these medical conditions.16, 17

Accountable care organizations will be assessed using an all‐condition readmission measure,31 several states track all‐condition readmission rates,32–34 and several countries measure all‐condition mortality.35 An all‐condition measure for quality assessment first requires that there be a hospital‐wide quality signal above and beyond disease‐specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across 3 common conditions. There are other considerations, however, in developing all‐condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described to be substantial for mortality measures.35

We were surprised by the finding that risk‐standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals might be more homogenous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all 3 conditions, because estimates for these hospitals are more precise. Consequently, we have greater confidence in results for larger volumes, and suspect a similar quality signal may be present but more difficult to detect statistically in smaller hospitals. Overall correlations were higher when we restricted the sample to hospitals with at least 25 cases, as is used for public reporting. It is also possible that the finding is real given that large‐volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.14, 36

The kappa scores comparing quartile of national performance for pairs of conditions were only in the fair range. There are several possible explanations for this fact: 1) outcomes for these 3 conditions are not measuring the same constructs; 2) they are all measuring the same construct, but they are unreliable in doing so; and/or 3) hospitals have similar latent quality for all 3 conditions, but the national quality of performance differs by condition, yielding variable relative performance per hospital for each condition. Based solely on our findings, we cannot distinguish which, if any, of these explanations may be true.31

Our study has several limitations. First, all 3 conditions currently publicly reported by CMS are medical diagnoses, although AMI patients may be cared for in distinct cardiology units and often undergo procedures; therefore, we cannot determine the degree to which correlations reflect hospital‐wide quality versus medicine‐wide quality. An institution may have a weak medicine department but a strong surgical department or vice versa. Second, it is possible that the correlations among conditions for readmission and among conditions for mortality are attributable to patient characteristics that are not adequately adjusted for in the risk‐adjustment model, such as socioeconomic factors, or to hospital characteristics not related to quality, such as coding practices or inter‐hospital transfer rates. For this to be true, these unmeasured characteristics would have to be consistent across different conditions within each hospital and have a consistent influence on outcomes. Third, it is possible that public reporting may have prompted disease‐specific focus on these conditions. We do not have data from non‐publicly reported conditions to test this hypothesis. Fourth, there are many small‐volume hospitals in this study; their estimates for readmission and mortality are less reliable than for large‐volume hospitals, potentially limiting our ability to detect correlations in this group of hospitals.

This study lends credence to the hypothesis that 30‐day risk‐standardized mortality and readmission rates for individual conditions may reflect aspects of hospital‐wide quality or at least medicine‐wide quality, although the correlations are not large enough to conclude that hospital‐wide factors play a dominant role, and there are other possible explanations for the correlations. Further work is warranted to better understand the causes of the correlations, and to better specify the nature of hospital factors that contribute to correlations among outcomes.

Acknowledgements

Disclosures: Dr Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program. Dr Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30 AG021342 NIH/NIA). Dr Krumholz is supported by grant U01 HL105270‐01 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. Dr Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Authors Drye, Krumholz, and Wang receive support from the Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting. The analyses upon which this publication is based were performed under Contract Number HHSM‐500‐2008‐0025I Task Order T0001, entitled Measure & Instrument Development and Support (MIDS): Development and Re‐evaluation of the CMS Hospital Outcomes and Efficiency Measures, funded by the Centers for Medicare & Medicaid Services, an agency of the US Department of Health and Human Services. The Centers for Medicare & Medicaid Services reviewed and approved the use of its data for this work, and approved submission of the manuscript. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital‐specific, 30‐day risk‐standardized mortality and readmission rates for Medicare fee‐for‐service patients admitted with acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2, 3

Quality‐of‐care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition‐specific process measures and either mortality or readmission rates for those conditions.4–6 Mortality and readmission rates may instead reflect broader hospital‐wide or specialty‐wide structure, culture, and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9, 10 information technology,11 patient volume,12–14 academic status,12 and other institution‐wide factors.12 There is now a strong policy push towards developing hospital‐wide (all‐condition) measures, beginning with readmission.15

It is not clear how much of the quality of care for a given condition is attributable to hospital‐wide influences that affect all conditions rather than disease‐specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital‐wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease‐specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease‐specific care, such as early identification of the relevant patient population or standardizing disease‐specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.

One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent among different conditions cared for in the same hospital. If mortality (or readmission) rates across different conditions are highly correlated, it would suggest that hospital‐wide factors may play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16, 17 while other studies have found little correlation among mortality rates for various medical conditions.18, 19 It is also possible that correlation varies according to hospital characteristics; for example, smaller or nonteaching hospitals might provide more homogeneous care than larger institutions. No studies have been performed using publicly reported estimates of risk‐standardized mortality or readmission rates. In this study, we use the publicly reported measures of 30‐day mortality and 30‐day readmission for AMI, HF, and pneumonia to examine whether, and to what degree, mortality rates track together within US hospitals and, separately, to what degree readmission rates track together within US hospitals.

METHODS

Data Sources

CMS calculates risk‐standardized mortality and readmission rates, and patient volume, for all acute care nonfederal hospitals with one or more eligible cases of AMI, HF, and pneumonia annually based on fee‐for‐service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for the conditions over the 3‐year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20, 21 In addition, we used the 2008 American Hospital Association (AHA) Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not‐for‐profit, for‐profit), teaching status (member of Council of Teaching Hospitals, other teaching hospital, nonteaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core‐based statistical area (division [subarea of an area with an urban center >2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of between 10,000 and 50,000 people], and rural [<10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than one standard deviation above their respective state's mean private hospital Medicaid caseload using the 2007 AHA Annual Survey data.
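The safety net definition above reduces to a simple threshold rule. A minimal sketch (hypothetical Medicaid shares; function and argument names are assumptions, and the sample standard deviation is assumed, since the text does not specify sample versus population):

```python
from statistics import mean, stdev

def is_safety_net(ownership, hospital_medicaid_share, state_private_shares):
    """Safety net per the definition above: public (government) hospitals,
    or private hospitals whose Medicaid caseload exceeds the state's mean
    private-hospital Medicaid caseload by more than one standard deviation."""
    if ownership == "government":
        return True
    threshold = mean(state_private_shares) + stdev(state_private_shares)
    return hospital_medicaid_share > threshold

# Hypothetical state with five private hospitals' Medicaid shares.
shares = [0.10, 0.12, 0.15, 0.18, 0.20]
print(is_safety_net("government", 0.05, shares))  # True
print(is_safety_net("private", 0.30, shares))     # True
print(is_safety_net("private", 0.15, shares))     # False
```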

Study Sample

This study includes 2 hospital cohorts, 1 for mortality and 1 for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk‐standardized mortality rates for all 3 conditions (AMI, HF, and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk‐standardized readmission rates for all 3 of these conditions.

Risk‐Standardized Measures

The measures include all FFS Medicare patients who are 65 years of age or older, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with 1 of the 3 qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributed to the initial admitting hospital, even if the patient is then transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributed to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For the mortality measures, if a patient has multiple hospitalizations in a given year, 1 hospitalization is randomly selected as the index admission. For the readmission measures, admissions in which the patient died prior to discharge, and admissions within 30 days of an index admission, are not counted as index admissions.
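The index‐admission rules for the readmission measures can be sketched as a single pass over one patient's admissions in date order. This is a simplified illustration of the rules just described, not the CMS measure specification:

```python
from datetime import date

def select_index_admissions(admissions):
    """Given (admit_date, discharge_date, died_in_hospital) tuples for one
    patient, return the index admissions: admissions where the patient died
    before discharge, or that begin within 30 days of a prior index
    admission's discharge, are not counted as index admissions."""
    index = []
    last_index_discharge = None
    for adm, dis, died in sorted(admissions):
        within_30 = (last_index_discharge is not None
                     and (adm - last_index_discharge).days <= 30)
        if died or within_30:
            continue  # not an index admission
        index.append((adm, dis, died))
        last_index_discharge = dis
    return index

adms = [
    (date(2009, 1, 1), date(2009, 1, 5), False),    # index admission
    (date(2009, 1, 20), date(2009, 1, 25), False),  # within 30 days: not index
    (date(2009, 4, 1), date(2009, 4, 8), True),     # died in hospital: not index
    (date(2009, 6, 1), date(2009, 6, 4), False),    # index admission
]
print(len(select_index_admissions(adms)))  # 2
```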

Outcomes for all measures are all‐cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non‐acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23

The derivation and validation of the risk‐standardized outcome measures have been previously reported.20, 21, 2327 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates, and a hospital‐specific random effect. The rates are calculated as the ratio of the number of predicted outcomes (obtained from a model applying the hospital‐specific effect) to the number of expected outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30‐day rate.

Statistical Analysis

We examined patterns and distributions of hospital volume, risk‐standardized mortality rates, and risk‐standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk‐standardized mortality rates for AMI, HF, and pneumonia, we calculated Pearson correlation coefficients, resulting in 3 correlations for the 3 pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia), and tested whether they were significantly different from 0. We also conducted a factor analysis using the principal component method with a minimum eigenvalue of 1 to retain factors to determine whether there was a single common factor underlying mortality performance for the 3 conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of risk‐standardized rate, and compared quartile of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance in both conditions, the percent of hospitals in either the top quartile of performance or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. Then, we examined Pearson correlation coefficients in different subgroups of hospitals, including by size, ownership, teaching status, cardiac procedure capability, statistical area, and safety net status. In order to determine whether these correlations differed by hospital characteristics, we tested if the Pearson correlation coefficients were different between any 2 subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk‐standardized readmission rates.

To determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses, we included only hospitals reporting both mortality and readmission rates for the condition pairs. We used the same methods to determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair among subgroups of hospital characteristics.

All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a P‐value < 0.05 to be statistically significant, and all statistical tests were 2‐tailed.

RESULTS

The mortality cohort included 4559 hospitals, and the readmission cohort included 4468 hospitals. The majority of hospitals was small, nonteaching, and did not have advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).

Hospital Characteristics for Each Cohort
DescriptionMortality MeasuresReadmission Measures
 Hospital N = 4559Hospital N = 4468
 N (%)*N (%)*
  • Abbreviations: CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; SD, standard deviation. *Unless otherwise specified.

No. of beds  
>600157 (3.4)156 (3.5)
300600628 (13.8)626 (14.0)
<3003588 (78.7)3505 (78.5)
Unknown186 (4.08)181 (4.1)
Mean (SD)173.24 (189.52)175.23 (190.00)
Ownership  
Not‐for‐profit2650 (58.1)2619 (58.6)
For‐profit672 (14.7)663 (14.8)
Government1051 (23.1)1005 (22.5)
Unknown186 (4.1)181 (4.1)
Teaching status  
COTH277 (6.1)276 (6.2)
Teaching505 (11.1)503 (11.3)
Nonteaching3591 (78.8)3508 (78.5)
Unknown186 (4.1)181 (4.1)
Cardiac facility type  
CABG1471 (32.3)1467 (32.8)
Cath lab578 (12.7)578 (12.9)
Neither2324 (51.0)2242 (50.2)
Unknown186 (4.1)181 (4.1)
Core‐based statistical area  
Division621 (13.6)618 (13.8)
Metro1850 (40.6)1835 (41.1)
Micro801 (17.6)788 (17.6)
Rural1101 (24.2)1046 (23.4)
Unknown186 (4.1)181 (4.1)
Safety net status  
No2995 (65.7)2967 (66.4)
Yes1377 (30.2)1319 (29.5)
Unknown187 (4.1)182 (4.1)

For mortality measures, the smallest median number of cases per hospital was for AMI (48; interquartile range [IQR], 13,171), and the greatest number was for pneumonia (178; IQR, 87, 336). The same pattern held for readmission measures (AMI median 33; IQR; 9, 150; pneumonia median 191; IQR, 95, 352.5). With respect to mortality measures, AMI had the highest rate and HF the lowest rate; however, for readmission measures, HF had the highest rate and pneumonia the lowest rate (Table 2).

Table 2. Hospital Volume and Risk‐Standardized Rates for Each Condition in the Mortality and Readmission Cohorts

| Description | Mortality: AMI | Mortality: HF | Mortality: PN | Readmission: AMI | Readmission: HF | Readmission: PN |
|---|---|---|---|---|---|---|
| Total discharges | 558,653 | 1,094,960 | 1,114,706 | 546,514 | 1,314,394 | 1,152,708 |
| Hospital volume | | | | | | |
| Mean (SD) | 122.54 (172.52) | 240.18 (271.35) | 244.51 (220.74) | 122.32 (201.78) | 294.18 (333.2) | 257.99 (228.5) |
| Median (IQR) | 48 (13, 171) | 142 (56, 337) | 178 (87, 336) | 33 (9, 150) | 172.5 (68, 407) | 191 (95, 352.5) |
| Range (min, max) | 1, 1379 | 1, 2814 | 1, 2241 | 1, 1611 | 1, 3410 | 2, 2359 |
| 30‐day risk‐standardized rate* | | | | | | |
| Mean (SD) | 15.7 (1.8) | 10.9 (1.6) | 11.5 (1.9) | 19.9 (1.5) | 24.8 (2.1) | 18.5 (1.7) |
| Median (IQR) | 15.7 (14.5, 16.8) | 10.8 (9.9, 11.9) | 11.3 (10.2, 12.6) | 19.9 (18.9, 20.8) | 24.7 (23.4, 26.1) | 18.4 (17.3, 19.5) |
| Range (min, max) | 10.3, 24.6 | 6.6, 18.2 | 6.7, 20.9 | 15.2, 26.3 | 17.3, 32.4 | 13.6, 26.7 |

Abbreviations: AMI, acute myocardial infarction; HF, heart failure; IQR, interquartile range; PN, pneumonia; SD, standard deviation. *Weighted by hospital volume. Mortality measures, N = 4559 hospitals; readmission measures, N = 4468 hospitals.

Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27–0.41; P < 0.0001 for all 3 correlations). For example, the correlation between risk‐standardized mortality rates (RSMR) for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32–0.47; P < 0.0001 for all 3 correlations). Overall, the lowest correlation was between risk‐standardized mortality rates for AMI and pneumonia (r = 0.27), and the highest correlation was between risk‐standardized readmission rates (RSRR) for HF and pneumonia (r = 0.47) (Table 3).
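The pairwise analysis above can be illustrated in code. The sketch below is a hypothetical reconstruction, not the study's SAS program: it simulates hospital‐level rates centered on the published means with a shared latent quality factor (all simulation parameters are assumptions), then computes the three pairwise Pearson correlations with two‐tailed P values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_hospitals = 500  # illustrative; the study cohorts had ~4500 hospitals

# Simulated risk-standardized mortality rates (%) per hospital, centered on
# the published means, with a shared hospital-level quality factor so the
# pairwise correlations come out positive (coefficients are assumptions).
quality = rng.normal(0.0, 1.0, n_hospitals)
rates = {
    "AMI": 15.7 + 1.2 * quality + rng.normal(0, 1.3, n_hospitals),
    "HF": 10.9 + 1.0 * quality + rng.normal(0, 1.2, n_hospitals),
    "PN": 11.5 + 1.2 * quality + rng.normal(0, 1.4, n_hospitals),
}

# Pearson r and two-tailed P for each pair of conditions.
for a, b in [("AMI", "HF"), ("AMI", "PN"), ("HF", "PN")]:
    r, p = stats.pearsonr(rates[a], rates[b])
    print(f"{a} vs {b}: r = {r:.2f}, P = {p:.2g}")
```

With roughly 4500 hospitals, correlations of 0.27–0.47 are easily distinguishable from zero, which is why every pairwise P value in the study is below 0.0001.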

Table 3. Correlations Between Risk‐Standardized Mortality Rates and Between Risk‐Standardized Readmission Rates for Subgroups of Hospitals

| Description | N (Mortality) | AMI and HF, r | AMI and PN, r | HF and PN, r | N (Readmission) | AMI and HF, r | AMI and PN, r | HF and PN, r |
|---|---|---|---|---|---|---|---|---|
| All | 4559 | 0.30 | 0.27 | 0.41 | 4468 | 0.38 | 0.32 | 0.47 |
| Hospitals with ≥25 patients | 2872 | 0.33 | 0.30 | 0.44 | 2467 | 0.44 | 0.38 | 0.51 |
| No. of beds (P) | | 0.15 | 0.005 | 0.0009 | | <0.0001 | <0.0001 | <0.0001 |
| >600 | 157 | 0.38 | 0.43 | 0.51 | 156 | 0.67 | 0.50 | 0.66 |
| 300–600 | 628 | 0.29 | 0.30 | 0.49 | 626 | 0.54 | 0.45 | 0.58 |
| <300 | 3588 | 0.27 | 0.23 | 0.37 | 3505 | 0.30 | 0.26 | 0.44 |
| Ownership (P) | | 0.021 | 0.05 | 0.39 | | 0.0004 | 0.0004 | 0.003 |
| Not‐for‐profit | 2650 | 0.32 | 0.28 | 0.42 | 2619 | 0.43 | 0.36 | 0.50 |
| For‐profit | 672 | 0.30 | 0.23 | 0.40 | 663 | 0.29 | 0.22 | 0.40 |
| Government | 1051 | 0.24 | 0.22 | 0.39 | 1005 | 0.32 | 0.29 | 0.45 |
| Teaching status (P) | | 0.11 | 0.08 | 0.0012 | | <0.0001 | 0.0002 | 0.0003 |
| COTH | 277 | 0.31 | 0.34 | 0.54 | 276 | 0.54 | 0.47 | 0.59 |
| Teaching | 505 | 0.22 | 0.28 | 0.43 | 503 | 0.52 | 0.42 | 0.56 |
| Nonteaching | 3591 | 0.29 | 0.24 | 0.39 | 3508 | 0.32 | 0.26 | 0.44 |
| Cardiac facility type (P) | | 0.022 | 0.006 | <0.0001 | | <0.0001 | 0.0006 | 0.004 |
| CABG | 1471 | 0.33 | 0.29 | 0.47 | 1467 | 0.48 | 0.37 | 0.52 |
| Cath lab | 578 | 0.25 | 0.26 | 0.36 | 578 | 0.32 | 0.37 | 0.47 |
| Neither | 2324 | 0.26 | 0.21 | 0.36 | 2242 | 0.28 | 0.27 | 0.44 |
| Core‐based statistical area (P) | | 0.0001 | <0.0001 | 0.002 | | <0.0001 | <0.0001 | <0.0001 |
| Division | 621 | 0.38 | 0.34 | 0.41 | 618 | 0.46 | 0.40 | 0.56 |
| Metro | 1850 | 0.26 | 0.26 | 0.42 | 1835 | 0.38 | 0.30 | 0.40 |
| Micro | 801 | 0.23 | 0.22 | 0.34 | 788 | 0.32 | 0.30 | 0.47 |
| Rural | 1101 | 0.21 | 0.13 | 0.32 | 1046 | 0.22 | 0.21 | 0.44 |
| Safety net status (P) | | 0.001 | 0.027 | 0.68 | | 0.029 | 0.037 | 0.28 |
| No | 2995 | 0.33 | 0.28 | 0.41 | 2967 | 0.40 | 0.33 | 0.48 |
| Yes | 1377 | 0.23 | 0.21 | 0.40 | 1319 | 0.34 | 0.30 | 0.45 |

NOTE: The P value shown on each subgroup heading row is the minimum P value of pairwise comparisons within that subgroup. Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; N, number of hospitals; PN, pneumonia; r, Pearson correlation coefficient.

Both the factor analysis for the mortality measures and the factor analysis for the readmission measures yielded only one factor with an eigenvalue >1. In each analysis, this single common factor accounted for more than half of the total variance (55% for the mortality measures and 60% for the readmission measures). For the mortality measures, the factor loadings of the RSMRs for myocardial infarction (MI), heart failure (HF), and pneumonia (PN) were high (0.68, 0.78, and 0.76, respectively); the same was true of the RSRR loadings for the readmission measures (0.72 for MI, 0.81 for HF, and 0.78 for PN).
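The one‐factor result can be approximated from the published pairwise correlations alone. The sketch below is a principal‐component approximation under the Kaiser eigenvalue‐greater‐than‐one rule, not the study's exact factoring procedure, applied to a correlation matrix assembled from the Table 3 values.

```python
import numpy as np

# Mortality correlation matrix assembled from the pairwise correlations in
# Table 3 (AMI-HF 0.30, AMI-PN 0.27, HF-PN 0.41); variable order: AMI, HF, PN.
R = np.array([
    [1.00, 0.30, 0.27],
    [0.30, 1.00, 0.41],
    [0.27, 0.41, 1.00],
])

# Eigen-decomposition; np.linalg.eigh returns eigenvalues in ascending
# order, so reverse to put the largest first.
eigvals, eigvecs = np.linalg.eigh(R)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

n_factors = int(np.sum(eigvals > 1.0))   # Kaiser criterion: eigenvalue > 1
explained = eigvals[0] / eigvals.sum()   # share of total variance retained

# Loadings on the first component (eigenvector sign is arbitrary, hence abs).
loadings = np.abs(eigvecs[:, 0]) * np.sqrt(eigvals[0])
print(n_factors, round(float(explained), 3), np.round(loadings, 2))
```

Run on the mortality correlations, this keeps a single component explaining roughly 55% of total variance, with loadings near the reported 0.68/0.78/0.76; substituting the readmission correlations (0.38, 0.32, 0.47) gives roughly 60%, matching the paper.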

For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile of performance if they were in the top or bottom quartile than if they were in the middle. Less than 10% of hospitals were in the top quartile for one condition in the mortality or readmission pair and in the bottom quartile for the other condition in the pair. Kappa scores for same quartile of performance between pairs of outcomes ranged from 0.16 to 0.27, and were highest for HF and pneumonia for both mortality and readmission rates.

Table 4. Measures of Agreement for Quartiles of Performance in Mortality and Readmission Pairs

| Condition Pair | Same Quartile (Any), % | Same Quartile (Q1 or Q4), % | Q1 in One and Q4 in the Other, % | Weighted Kappa | Spearman Correlation |
|---|---|---|---|---|---|
| Mortality | | | | | |
| MI and HF | 34.8 | 20.2 | 7.9 | 0.19 | 0.25 |
| MI and PN | 32.7 | 18.8 | 8.2 | 0.16 | 0.22 |
| HF and PN | 35.9 | 21.8 | 5.0 | 0.26 | 0.36 |
| Readmission | | | | | |
| MI and HF | 36.6 | 21.0 | 7.5 | 0.22 | 0.28 |
| MI and PN | 34.0 | 19.6 | 8.1 | 0.19 | 0.24 |
| HF and PN | 37.1 | 22.6 | 5.4 | 0.27 | 0.37 |

Abbreviations: HF, heart failure; MI, myocardial infarction; PN, pneumonia.
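Quartile agreement of this kind can be sketched in a few lines. The example below is illustrative only: the rates are simulated around the published HF and PN readmission means with an assumed shared quality factor, and the weighted kappa uses scikit-learn's linear weighting (the study's exact weighting scheme is not specified here).

```python
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
n = 4000  # illustrative number of hospitals

# Simulated hospital-level risk-standardized readmission rates for two
# conditions sharing a latent quality component (parameters are assumptions).
quality = rng.normal(0, 1, n)
rate_hf = 24.8 + 1.5 * quality + rng.normal(0, 1.5, n)
rate_pn = 18.5 + 1.2 * quality + rng.normal(0, 1.3, n)

# Assign each hospital to a national quartile of performance per condition.
q_hf = pd.qcut(rate_hf, 4, labels=False)  # 0 = best ... 3 = worst quartile
q_pn = pd.qcut(rate_pn, 4, labels=False)

same_any = np.mean(q_hf == q_pn)  # same quartile for both conditions
extreme = np.mean(((q_hf == 0) & (q_pn == 3)) | ((q_hf == 3) & (q_pn == 0)))
kappa = cohen_kappa_score(q_hf, q_pn, weights="linear")  # weighted kappa
print(f"same quartile: {same_any:.1%}  Q1/Q4 discordant: {extreme:.1%}  "
      f"weighted kappa: {kappa:.2f}")
```

With a latent-factor correlation near the reported 0.47, this kind of simulation lands in the same neighborhood as Table 4: agreement in the upper 30s of percent, few extreme-quartile discordances, and a fair-range kappa.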

In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r = 0.51, P = 0.0009), and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r = 0.67, P < 0.0001). Across both measures and all 3 condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital's Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not‐for‐profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non‐safety net hospitals (Table 3).

For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (P < 0.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate‐sized hospitals, among nonprofit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non‐safety net hospitals (Table 5).
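The study compared dependent, nonoverlapping correlations using the method of Raghunathan and colleagues.[30] As a simpler illustration only, the sketch below applies Fisher's r‑to‑z transformation and treats the two correlations as independent, which overstates the standard error when the correlations are positively dependent and is therefore conservative; the numbers plugged in are the AMI and HF values from Table 5.

```python
import math
from scipy import stats

def fisher_z_compare(r1, n1, r2, n2):
    """Two-tailed test of H0: rho1 == rho2 for two Pearson correlations,
    treating the two samples as independent (Fisher r-to-z transformation)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# AMI and HF pair, values from Table 5: mortality correlation 0.31 and
# readmission correlation 0.38, each over 4457 hospitals (treating the two
# correlations as independent is the simplifying assumption here).
z, p = fisher_z_compare(0.31, 4457, 0.38, 4457)
print(f"z = {z:.2f}, P = {p:.4f}")
```

Even this conservative approximation rejects equality decisively, consistent with the reported P < 0.0001 for the AMI and HF pair.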

Table 5. Comparison of Correlations Between Mortality Rates and Correlations Between Readmission Rates for Condition Pairs

Each cell gives N; mortality correlation (MC); readmission correlation (RC); P for the difference.

| Description | AMI and HF | AMI and PN | HF and PN |
|---|---|---|---|
| All | 4457; 0.31; 0.38; <0.0001 | 4459; 0.27; 0.32; 0.007 | 4731; 0.41; 0.46; 0.0004 |
| Hospitals with ≥25 patients | 2472; 0.33; 0.44; <0.001 | 2463; 0.31; 0.38; 0.01 | 4104; 0.42; 0.47; 0.001 |
| No. of beds: >600 | 156; 0.38; 0.67; 0.0002 | 156; 0.43; 0.50; 0.48 | 160; 0.51; 0.66; 0.042 |
| No. of beds: 300–600 | 626; 0.29; 0.54; <0.0001 | 626; 0.31; 0.45; 0.003 | 630; 0.49; 0.58; 0.033 |
| No. of beds: <300 | 3494; 0.28; 0.30; 0.21 | 3496; 0.23; 0.26; 0.17 | 3733; 0.37; 0.43; 0.003 |
| Ownership: Not‐for‐profit | 2614; 0.32; 0.43; <0.0001 | 2617; 0.28; 0.36; 0.003 | 2697; 0.42; 0.50; 0.0003 |
| Ownership: For‐profit | 662; 0.30; 0.29; 0.90 | 661; 0.23; 0.22; 0.75 | 699; 0.40; 0.40; 0.99 |
| Ownership: Government | 1000; 0.25; 0.32; 0.09 | 1000; 0.22; 0.29; 0.09 | 1127; 0.39; 0.43; 0.21 |
| Teaching status: COTH | 276; 0.31; 0.54; 0.001 | 277; 0.35; 0.46; 0.10 | 278; 0.54; 0.59; 0.41 |
| Teaching status: Teaching | 504; 0.22; 0.52; <0.0001 | 504; 0.28; 0.42; 0.012 | 508; 0.43; 0.56; 0.005 |
| Teaching status: Nonteaching | 3496; 0.29; 0.32; 0.18 | 3497; 0.24; 0.26; 0.46 | 3737; 0.39; 0.43; 0.016 |
| Cardiac facility type: CABG | 1465; 0.33; 0.48; <0.0001 | 1467; 0.30; 0.37; 0.018 | 1483; 0.47; 0.51; 0.103 |
| Cardiac facility type: Cath lab | 577; 0.25; 0.32; 0.18 | 577; 0.26; 0.37; 0.046 | 579; 0.36; 0.47; 0.022 |
| Cardiac facility type: Neither | 2234; 0.26; 0.28; 0.48 | 2234; 0.21; 0.27; 0.037 | 2461; 0.36; 0.44; 0.002 |
| Core‐based statistical area: Division | 618; 0.38; 0.46; 0.09 | 620; 0.34; 0.40; 0.18 | 630; 0.41; 0.56; 0.001 |
| Core‐based statistical area: Metro | 1833; 0.26; 0.38; <0.0001 | 1832; 0.26; 0.30; 0.21 | 1896; 0.42; 0.40; 0.63 |
| Core‐based statistical area: Micro | 787; 0.24; 0.32; 0.08 | 787; 0.22; 0.30; 0.11 | 820; 0.34; 0.46; 0.003 |
| Core‐based statistical area: Rural | 1038; 0.21; 0.22; 0.83 | 1039; 0.13; 0.21; 0.056 | 1177; 0.32; 0.43; 0.002 |
| Safety net status: No | 2961; 0.33; 0.40; 0.001 | 2963; 0.28; 0.33; 0.036 | 3062; 0.41; 0.48; 0.001 |
| Safety net status: Yes | 1314; 0.23; 0.34; 0.003 | 1314; 0.22; 0.30; 0.015 | 1460; 0.40; 0.45; 0.14 |

Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; MC, mortality correlation; N, number of hospitals; PN, pneumonia; RC, readmission correlation.

DISCUSSION

In this study, we found that risk‐standardized mortality rates for 3 common medical conditions were moderately correlated within institutions, as were risk‐standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked closest together in large, urban, and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for a different condition.

Our findings are consistent with the hypothesis that 30‐day risk‐standardized mortality and 30‐day risk‐standardized readmission rates, in part, capture broad aspects of hospital quality that transcend condition‐specific activities. In this study, readmission rates tracked better together than mortality rates for every pair of conditions, suggesting that there may be a greater contribution of hospital‐wide environment, structure, and processes to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation, and discharge communication with patients and outpatient clinicians, are typically hospital‐wide processes.

Our study differs from earlier studies of medical conditions in that the correlations we found were higher.[18, 19] There are several possible explanations for this difference. First, during the intervening 15–25 years since those studies were performed, care for these conditions has evolved substantially, such that there are now more standardized protocols available for all 3 of these diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing correlation of performance among conditions. In addition, more technologies and systems that span care for multiple conditions, such as electronic medical records and quality committees, are available now than in previous generations. Second, one of these studies utilized less robust risk adjustment,[18] and neither used the same methodology of risk standardization. Nonetheless, it is interesting that Rosenthal and colleagues identified the same increase in correlation with higher volumes that we did.[19] Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found for these medical conditions.[16, 17]

Accountable care organizations will be assessed using an all‐condition readmission measure,[31] several states track all‐condition readmission rates,[32, 33, 34] and several countries measure all‐condition mortality.[35] An all‐condition measure for quality assessment first requires that there be a hospital‐wide quality signal above and beyond disease‐specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across 3 common conditions. There are other considerations, however, in developing all‐condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in the types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described as substantial for mortality measures.[35]

We were surprised by the finding that risk‐standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals would be more homogeneous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all 3 conditions, because estimates for these hospitals are more precise. Consequently, we have greater confidence in results for larger‐volume hospitals, and suspect a similar quality signal may be present but more difficult to detect statistically in smaller hospitals. Consistent with this, overall correlations were higher when we restricted the sample to hospitals with at least 25 cases, as is done for public reporting. It is also possible that the finding is real, given that large‐volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.[14, 36]

The kappa scores comparing quartile of national performance for pairs of conditions were only in the fair range. There are several possible explanations: 1) outcomes for these 3 conditions do not measure the same construct; 2) they all measure the same construct, but do so unreliably; and/or 3) hospitals have similar latent quality for all 3 conditions, but national performance differs by condition, yielding variable relative performance per hospital for each condition. Based solely on our findings, we cannot distinguish which, if any, of these explanations is true.[31]

Our study has several limitations. First, all 3 conditions currently publicly reported by CMS are medical diagnoses, although AMI patients may be cared for in distinct cardiology units and often undergo procedures; therefore, we cannot determine the degree to which correlations reflect hospital‐wide quality versus medicine‐wide quality. An institution may have a weak medicine department but a strong surgical department or vice versa. Second, it is possible that the correlations among conditions for readmission and among conditions for mortality are attributable to patient characteristics that are not adequately adjusted for in the risk‐adjustment model, such as socioeconomic factors, or to hospital characteristics not related to quality, such as coding practices or inter‐hospital transfer rates. For this to be true, these unmeasured characteristics would have to be consistent across different conditions within each hospital and have a consistent influence on outcomes. Third, it is possible that public reporting may have prompted disease‐specific focus on these conditions. We do not have data from non‐publicly reported conditions to test this hypothesis. Fourth, there are many small‐volume hospitals in this study; their estimates for readmission and mortality are less reliable than for large‐volume hospitals, potentially limiting our ability to detect correlations in this group of hospitals.

This study lends credence to the hypothesis that 30‐day risk‐standardized mortality and readmission rates for individual conditions may reflect aspects of hospital‐wide quality or at least medicine‐wide quality, although the correlations are not large enough to conclude that hospital‐wide factors play a dominant role, and there are other possible explanations for the correlations. Further work is warranted to better understand the causes of the correlations, and to better specify the nature of hospital factors that contribute to correlations among outcomes.

Acknowledgements

Disclosures: Dr Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program. Dr Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30 AG021342 NIH/NIA). Dr Krumholz is supported by grant U01 HL105270‐01 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. Dr Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Authors Drye, Krumholz, and Wang receive support from the Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting. The analyses upon which this publication is based were performed under Contract Number HHSM‐500‐2008‐0025I Task Order T0001, entitled Measure & Instrument Development and Support (MIDS)Development and Re‐evaluation of the CMS Hospital Outcomes and Efficiency Measures, funded by the Centers for Medicare & Medicaid Services, an agency of the US Department of Health and Human Services. The Centers for Medicare & Medicaid Services reviewed and approved the use of its data for this work, and approved submission of the manuscript. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

References
  1. US Department of Health and Human Services. Hospital Compare. 2011. Available at: http://www.hospitalcompare.hhs.gov. Accessed March 5, 2011.
  2. Balla U, Malnick S, Schattner A. Early readmissions to the department of medicine as a screening tool for monitoring quality of care problems. Medicine (Baltimore). 2008;87(5):294–300.
  3. Dubois RW, Rogers WH, Moxley JH, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317(26):1674–1680.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694–2702.
  5. Jha AK, Orav EJ, Epstein AM. Public reporting of discharge planning and rates of readmissions. N Engl J Med. 2009;361(27):2637–2645.
  6. Patterson ME, Hernandez AF, Hammill BG, et al. Process of care performance measures and long‐term outcomes in patients hospitalized with heart failure. Med Care. 2010;48(3):210–216.
  7. Chukmaitov AS, Bazzoli GJ, Harless DW, Hurley RE, Devers KJ, Zhao M. Variations in inpatient mortality among hospitals in different system types, 1995 to 2000. Med Care. 2009;47(4):466–473.
  8. Devereaux PJ, Choi PT, Lacchetti C, et al. A systematic review and meta‐analysis of studies comparing mortality rates of private for‐profit and private not‐for‐profit hospitals. Can Med Assoc J. 2002;166(11):1399–1406.
  9. Curry LA, Spatz E, Cherlin E, et al. What distinguishes top‐performing hospitals in acute myocardial infarction mortality rates? A qualitative study. Ann Intern Med. 2011;154(6):384–390.
  10. Hansen LO, Williams MV, Singer SJ. Perceptions of hospital safety climate and incidence of readmission. Health Serv Res. 2011;46(2):596–616.
  11. Longhurst CA, Parast L, Sandborg CI, et al. Decrease in hospital‐wide mortality rate after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2010;126(1):14–21.
  12. Fink A, Yano EM, Brook RH. The condition of the literature on differences in hospital mortality. Med Care. 1989;27(4):315–336.
  13. Gandjour A, Bannenberg A, Lauterbach KW. Threshold volumes associated with higher survival in health care: a systematic review. Med Care. 2003;41(10):1129–1141.
  14. Ross JS, Normand SL, Wang Y, et al. Hospital volume and 30‐day mortality for three common medical conditions. N Engl J Med. 2010;362(12):1110–1118.
  15. Patient Protection and Affordable Care Act. Pub. L. No. 111–148, 124 Stat, §3025. 2010. Available at: http://www.gpo.gov/fdsys/pkg/PLAW‐111publ148/content‐detail.html. Accessed July 26, 2012.
  16. Dimick JB, Staiger DO, Birkmeyer JD. Are mortality rates for different operations related? Implications for measuring the quality of noncardiac surgery. Med Care. 2006;44(8):774–778.
  17. Goodney PP, O'Connor GT, Wennberg DE, Birkmeyer JD. Do hospitals with low mortality rates in coronary artery bypass also perform well in valve replacement? Ann Thorac Surg. 2003;76(4):1131–1137.
  18. Chassin MR, Park RE, Lohr KN, Keesey J, Brook RH. Differences among hospitals in Medicare patient mortality. Health Serv Res. 1989;24(1):1–31.
  19. Rosenthal GE, Shah A, Way LE, Harper DL. Variations in standardized hospital mortality rates for six common medical diagnoses: implications for profiling hospital quality. Med Care. 1998;36(7):955–964.
  20. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142–150.
  21. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30‐day all‐cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29–37.
  22. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety‐net hospitals. Health Aff (Millwood). 2007;26(1):238–248.
  23. National Quality Measures Clearinghouse. 2011. Available at: http://www.qualitymeasures.ahrq.gov/. Accessed February 21, 2011.
  24. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683–1692.
  25. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693–1701.
  26. Bratzler DW, Normand SL, Wang Y, et al. An administrative claims model for profiling hospital 30‐day mortality rates for pneumonia patients. PLoS One. 2011;6(4):e17401.
  27. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30‐day all‐cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243–252.
  28. Kaiser HF. The application of electronic computers to factor analysis. Educ Psychol Meas. 1960;20:141–151.
  29. Fisher RA. On the 'probable error' of a coefficient of correlation deduced from a small sample. Metron. 1921;1:3–32.
  30. Raghunathan TE, Rosenthal R, Rubin DB. Comparing correlated but nonoverlapping correlations. Psychol Methods. 1996;1(2):178–183.
  31. Centers for Medicare and Medicaid Services. Medicare Shared Savings Program: Accountable Care Organizations, Final Rule. Fed Reg. 2011;76:67802–67990.
  32. Massachusetts Healthcare Quality and Cost Council. Potentially Preventable Readmissions. 2011. Available at: http://www.mass.gov/hqcc/the‐hcqcc‐council/data‐submission‐information/potentially‐preventable‐readmissions‐ppr.html. Accessed February 29, 2012.
  33. Texas Medicaid. Potentially Preventable Readmission (PPR). 2012. Available at: http://www.tmhp.com/Pages/Medicaid/Hospital_PPR.aspx. Accessed February 29, 2012.
  34. New York State. Potentially Preventable Readmissions. 2011. Available at: http://www.health.ny.gov/regulations/recently_adopted/docs/2011–02‐23_potentially_preventable_readmissions.pdf. Accessed February 29, 2012.
  35. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital‐wide mortality rates. N Engl J Med. 2010;363(26):2530–2539.
  36. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16):1628–1638.
Issue
Journal of Hospital Medicine - 7(9)
Page Number
690-696
Display Headline
Correlations among risk‐standardized mortality rates and among risk‐standardized readmission rates within hospitals

Copyright © 2012 Society of Hospital Medicine

Correspondence Location
Section of General Internal Medicine, Department of Medicine, Yale University School of Medicine, PO Box 208093, New Haven, CT 06520‐8093
Continuing Medical Education Program in the Journal of Hospital Medicine

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Continuing Medical Education Program in the Journal of Hospital Medicine

If you wish to receive credit for this activity, which begins on the next page, please refer to the website: www.blackwellpublishing.com/cme.

Accreditation and Designation Statement

Blackwell Futura Media Services designates this educational activity for a 1 AMA PRA Category 1 Credit. Physicians should only claim credit commensurate with the extent of their participation in the activity.

Blackwell Futura Media Services is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians.

Educational Objectives

Upon completion of this educational activity, participants will be better able to:

  • Identify the approximate 30‐day readmission rate of Medicare patients hospitalized initially for pneumonia.

  • Distinguish which variables were accounted for, and which were unaccounted for, in the development of a pneumonia readmission model.

Continuous participation in the Journal of Hospital Medicine CME program will enable learners to be better able to:

  • Interpret clinical guidelines and their applications for higher quality and more efficient care for all hospitalized patients.

  • Describe the standard of care for common illnesses and conditions treated in the hospital, such as pneumonia, COPD exacerbation, acute coronary syndrome, HF exacerbation, glycemic control, venous thromboembolic disease, and stroke.

  • Discuss evidence‐based recommendations involving transitions of care, including the hospital discharge process.

  • Gain insights into the roles of hospitalists as medical educators, researchers, medical ethicists, palliative care providers, and hospital‐based geriatricians.

  • Incorporate best practices for hospitalist administration, including quality improvement, patient safety, practice management, leadership, and demonstrating hospitalist value.

  • Identify evidence‐based best practices and trends for both adult and pediatric hospital medicine.

Instructions on Receiving Credit

For information on applicability and acceptance of continuing medical education credit for this activity, please consult your professional licensing board.

This activity is designed to be completed within the time designated on the title page; physicians should claim only those credits that reflect the time actually spent in the activity. To successfully earn credit, participants must complete the activity during the valid credit period that is noted on the title page.

Follow these steps to earn credit:

  • Log on to www.blackwellpublishing.com/cme.

  • Read the target audience, learning objectives, and author disclosures.

  • Read the article in print or online format.

  • Reflect on the article.

  • Access the CME Exam, and choose the best answer to each question.

  • Complete the required evaluation component of the activity.

Article PDF
Issue
Journal of Hospital Medicine - 6(3)
Publications
Page Number
141-141
Sections
Article Source
Copyright © 2011 Society of Hospital Medicine
Disallow All Ads
Content Gating
Gated (full article locked unless allowed per User)
Gating Strategy
First Peek Free
Article PDF Media

Pneumonia Readmission Validation

Article Type
Changed
Thu, 05/25/2017 - 21:25
Display Headline
Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia

Hospital readmissions are emblematic of the numerous challenges facing the US health care system. Despite high levels of spending, nearly 20% of Medicare beneficiaries are readmitted within 30 days of hospital discharge, many readmissions are considered preventable, and rates vary widely by hospital and region.1 Further, while readmissions have been estimated to cost taxpayers as much as $17 billion annually, the current fee‐for‐service method of paying for the acute care needs of seniors rewards hospitals financially for readmissions, not for their prevention.2

Pneumonia is the second most common reason for hospitalization among Medicare beneficiaries, accounting for approximately 650,000 admissions annually,3 and has been a focus of national quality‐improvement efforts for more than a decade.4, 5 Despite improvements in key processes of care, rates of readmission within 30 days of discharge following a hospitalization for pneumonia have been reported to vary from 10% to 24%.6–8 Among several factors, readmissions are believed to be influenced by the quality of both inpatient and outpatient care, and by care‐coordination activities occurring in the transition from inpatient to outpatient status.9–12

Public reporting of hospital performance is considered a key strategy for improving quality, reducing costs, and increasing the value of hospital care, both in the US and worldwide.13 In 2009, the Centers for Medicare & Medicaid Services (CMS) expanded its reporting initiatives by adding risk‐adjusted hospital readmission rates for acute myocardial infarction, heart failure, and pneumonia to the Hospital Compare website.14, 15 Readmission rates are an attractive focus for public reporting for several reasons. First, in contrast to most process‐based measures of quality (eg, whether a patient with pneumonia received a particular antibiotic), a readmission is an adverse outcome that matters to patients and families.16 Second, unlike process measures whose assessment requires detailed review of medical records, readmissions can be easily determined from standard hospital claims. Finally, readmissions are costly, and their prevention could yield substantial savings to society.

A necessary prerequisite for public reporting of readmission is a validated, risk‐adjusted measure that can be used to track performance over time and can facilitate comparisons across institutions. Toward this end, we describe the development, validation, and results of a National Quality Forum‐approved and CMS‐adopted model to estimate hospital‐specific, risk‐standardized, 30‐day readmission rates for Medicare patients hospitalized with pneumonia.17

METHODS

Data Sources

We used 2005–2006 claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files to develop and validate the administrative model. The Medicare Enrollment Database was used to determine Medicare fee‐for‐service enrollment and mortality status. A medical record model, used for additional validation of the administrative model, was developed using information abstracted from the charts of 75,616 pneumonia cases from 1998–2001 as part of the National Pneumonia Project, a CMS quality improvement initiative.18

Study Cohort

We identified hospitalizations of patients 65 years of age and older with a principal diagnosis of pneumonia (International Classification of Diseases, 9th Revision, Clinical Modification codes 480.XX, 481, 482.XX, 483.X, 485, 486, 487.0) as potential index pneumonia admissions. Because our focus was readmission for patients discharged from acute care settings, we excluded admissions in which patients died or were transferred to another acute care facility. Additionally, we restricted analysis to patients who had been enrolled in fee‐for‐service Medicare Parts A and B for at least 12 months prior to their pneumonia hospitalization, so that we could use diagnostic codes from all inpatient and outpatient encounters during that period to enhance identification of comorbidities.

Outcome

The outcome was 30‐day readmission, defined as occurrence of at least one hospitalization for any cause within 30 days of discharge after an index admission. Readmissions were identified from hospital claims data, and were attributed to the hospital that had discharged the patient. A 30‐day time frame was selected because it is a clinically meaningful period during which hospitals can be expected to collaborate with other organizations and providers to implement measures to reduce the risk of rehospitalization.
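In code, this outcome definition reduces to a date-window check over a patient's hospital claims. The sketch below is illustrative only (hypothetical field names, and it assumes a readmission counts when it begins 1 to 30 days after the index discharge); the measure's actual claims-processing rules are more detailed:

```python
from datetime import date

def has_30_day_readmission(index_discharge, subsequent_admissions):
    """True if any later admission begins within 30 days of the index discharge.

    index_discharge: discharge date of the index pneumonia hospitalization.
    subsequent_admissions: admission dates from the patient's hospital claims.
    Assumption for this sketch: days 1-30 after discharge count as readmission.
    """
    return any(
        0 < (admit - index_discharge).days <= 30
        for admit in subsequent_admissions
    )

# A readmission 24 days after discharge counts; one at 101 days does not.
print(has_30_day_readmission(date(2006, 3, 1),
                             [date(2006, 3, 25), date(2006, 6, 10)]))  # → True
```

Only the first qualifying readmission matters for the outcome ("at least one hospitalization"), which is why a boolean per index admission suffices.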

Candidate and Final Model Variables

Candidate variables for the administrative claims model were selected by a clinician team from 189 diagnostic groups included in the Hierarchical Condition Category (HCC) clinical classification system.19 The HCC clinical classification system was developed for CMS in preparation for all‐encounter risk adjustment for Medicare Advantage (managed care). Under the HCC algorithm, the 15,000+ ICD‐9‐CM diagnosis codes are assigned to one of 189 clinically‐coherent condition categories (CCs). We used the April 2008 version of the ICD‐9‐CM to CC assignment map, which is maintained by CMS and posted at http://www.qualitynet.org. A total of 154 CCs were considered to be potentially relevant to readmission outcome and were included for further consideration. Some CCs were further combined into clinically coherent groupings of CCs. Our set of candidate variables ultimately included 97 CC‐based variables, two demographic variables (age and sex), and two procedure codes potentially relevant to readmission risk (history of percutaneous coronary intervention [PCI] and history of coronary artery bypass graft [CABG]).

The final risk‐adjustment model included 39 variables selected by the team of clinicians and analysts, primarily based on their clinical relevance but also with knowledge of the strength of their statistical association with readmission outcome (Table 1). For each patient, the presence or absence of these conditions was assessed from multiple sources, including secondary diagnoses during the index admission, principal and secondary diagnoses from hospital admissions in the 12 months prior to the index admission, and diagnoses from hospital outpatient and physician encounters 12 months before the index admission. A small number of CCs were considered to represent potential complications of care (eg, bleeding). Because we did not want to adjust for complications of care occurring during the index admission, a patient was not considered to have one of these conditions unless it was also present in at least one encounter prior to the index admission.
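The condition-assessment rule described above, including the special handling of potential complications of care, can be sketched as simple set logic. The helper below uses hypothetical CC labels and is not the measure's actual code:

```python
def condition_present(cc, index_secondary_ccs, prior_year_ccs, complication_ccs):
    """Decide whether a condition category (CC) counts for risk adjustment.

    A CC counts if it appears as a secondary diagnosis on the index admission
    or in any inpatient, outpatient, or physician encounter in the prior
    12 months. CCs flagged as potential complications of care (e.g., bleeding)
    are only credited when they appear in at least one pre-index encounter.
    """
    if cc in complication_ccs:
        return cc in prior_year_ccs
    return cc in index_secondary_ccs or cc in prior_year_ccs

# Bleeding coded only during the index stay is not credited as a comorbidity:
print(condition_present("bleeding", {"bleeding"}, set(), {"bleeding"}))  # → False
```

This mirrors the stated intent: complications arising during the index admission should not lower a hospital's expected readmission risk.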

Regression Model Variables and Results in Derivation Sample

| Variable | Frequency (%) | Estimate | Standard Error | Odds Ratio | 95% CI |
|---|---|---|---|---|---|
| Intercept |  | −2.395 | 0.021 |  |  |
| Age − 65 (years above 65, continuous) |  | −0.0001 | 0.001 | 1.000 | 0.998–1.001 |
| Male | 45 | 0.071 | 0.012 | 1.073 | 1.048–1.099 |
| History of CABG | 5.2 | −0.179 | 0.027 | 0.836 | 0.793–0.881 |
| Metastatic cancer and acute leukemia (CC 7) | 4.3 | 0.177 | 0.029 | 1.194 | 1.128–1.263 |
| Lung, upper digestive tract, and other severe cancers (CC 8) | 6.0 | 0.256 | 0.024 | 1.292 | 1.232–1.354 |
| Diabetes and DM complications (CC 15‐20, 119, 120) | 36 | 0.059 | 0.012 | 1.061 | 1.036–1.087 |
| Disorders of fluid/electrolyte/acid‐base (CC 22, 23) | 34 | 0.149 | 0.013 | 1.160 | 1.131–1.191 |
| Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 46 | 0.118 | 0.012 | 1.126 | 1.099–1.153 |
| Other psychiatric disorders (CC 60) | 12 | 0.108 | 0.017 | 1.114 | 1.077–1.151 |
| Cardio‐respiratory failure and shock (CC 79) | 16 | 0.114 | 0.016 | 1.121 | 1.087–1.156 |
| Congestive heart failure (CC 80) | 39 | 0.151 | 0.014 | 1.163 | 1.133–1.194 |
| Chronic atherosclerosis (CC 83, 84) | 47 | 0.051 | 0.013 | 1.053 | 1.027–1.079 |
| Valvular and rheumatic heart disease (CC 86) | 23 | 0.062 | 0.014 | 1.064 | 1.036–1.093 |
| Arrhythmias (CC 92, 93) | 38 | 0.126 | 0.013 | 1.134 | 1.107–1.163 |
| Vascular or circulatory disease (CC 104‐106) | 38 | 0.088 | 0.012 | 1.092 | 1.066–1.119 |
| COPD (CC 108) | 58 | 0.186 | 0.013 | 1.205 | 1.175–1.235 |
| Fibrosis of lung and other chronic lung disorders (CC 109) | 17 | 0.086 | 0.015 | 1.090 | 1.059–1.122 |
| Renal failure (CC 131) | 17 | 0.147 | 0.016 | 1.158 | 1.122–1.196 |
| Protein‐calorie malnutrition (CC 21) | 7.9 | 0.121 | 0.020 | 1.129 | 1.086–1.173 |
| History of infection (CC 1, 3‐6) | 35 | 0.068 | 0.012 | 1.071 | 1.045–1.097 |
| Severe hematological disorders (CC 44) | 3.6 | 0.117 | 0.028 | 1.125 | 1.064–1.188 |
| Decubitus ulcer or chronic skin ulcer (CC 148, 149) | 10 | 0.101 | 0.018 | 1.106 | 1.067–1.146 |
| History of pneumonia (CC 111‐113) | 44 | 0.065 | 0.013 | 1.067 | 1.041–1.094 |
| Vertebral fractures (CC 157) | 5.1 | 0.113 | 0.024 | 1.120 | 1.068–1.174 |
| Other injuries (CC 162) | 32 | 0.061 | 0.012 | 1.063 | 1.038–1.089 |
| Urinary tract infection (CC 135) | 26 | 0.064 | 0.014 | 1.066 | 1.038–1.095 |
| Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal, and other cancers and tumors (CC 9‐10) | 16 | 0.050 | 0.016 | 1.051 | 1.018–1.084 |
| End‐stage renal disease or dialysis (CC 129, 130) | 1.9 | 0.131 | 0.037 | 1.140 | 1.060–1.226 |
| Drug/alcohol abuse/dependence/psychosis (CC 51‐53) | 12 | 0.081 | 0.017 | 1.084 | 1.048–1.121 |
| Septicemia/shock (CC 2) | 6.3 | 0.094 | 0.022 | 1.098 | 1.052–1.146 |
| Other gastrointestinal disorders (CC 36) | 56 | 0.073 | 0.012 | 1.076 | 1.051–1.102 |
| Acute coronary syndrome (CC 81, 82) | 8.3 | 0.126 | 0.019 | 1.134 | 1.092–1.178 |
| Pleural effusion/pneumothorax (CC 114) | 12 | 0.083 | 0.017 | 1.086 | 1.051–1.123 |
| Other urinary tract disorders (CC 136) | 24 | 0.059 | 0.014 | 1.061 | 1.033–1.090 |
| Stroke (CC 95, 96) | 10 | 0.047 | 0.019 | 1.049 | 1.011–1.088 |
| Dementia and senility (CC 49, 50) | 27 | 0.031 | 0.014 | 1.031 | 1.004–1.059 |
| Hemiplegia, paraplegia, paralysis, functional disability (CC 67‐69, 100‐102, 177, 178) | 7.4 | 0.068 | 0.021 | 1.070 | 1.026–1.116 |
| Other lung disorders (CC 115) | 45 | 0.005 | 0.012 | 1.005 | 0.982–1.030 |
| Major psychiatric disorders (CC 54‐56) | 11 | 0.038 | 0.018 | 1.038 | 1.003–1.075 |
| Asthma (CC 110) | 12 | 0.006 | 0.018 | 1.006 | 0.972–1.041 |

  • Abbreviations: CABG, coronary artery bypass graft; CC, condition category; CI, confidence interval; COPD, chronic obstructive pulmonary disease; DM, diabetes mellitus.

Model Derivation

For the development of the administrative claims model, we randomly sampled half of the 2006 hospitalizations that met inclusion criteria. To assess model performance at the patient level, we calculated the area under the receiver operating characteristic curve (AUC), and calculated observed readmission rates in the lowest and highest deciles on the basis of predicted readmission probabilities. We also compared performance with a null model, a model that adjusted for age and sex, and a model that included all candidate variables.20
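The patient-level checks described here (AUC and observed rates by predicted-risk decile) can be sketched from first principles. The rank-based AUC below is a generic illustration, not the study's SAS code:

```python
def auc(y_true, y_score):
    """AUC via the Mann-Whitney rank statistic, with midranks for ties."""
    order = sorted(range(len(y_true)), key=lambda i: y_score[i])
    ranks = [0.0] * len(y_true)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and y_score[order[j]] == y_score[order[i]]:
            j += 1
        for k in range(i, j):              # assign the average (mid) rank
            ranks[order[k]] = (i + j + 1) / 2
        i = j
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    rank_sum = sum(r for r, y in zip(ranks, y_true) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def observed_rate_by_decile(y_true, y_score, n_bins=10):
    """Observed outcome rates from lowest to highest predicted-risk bins.

    For simplicity this sketch ignores the remainder when the sample size
    is not divisible by n_bins.
    """
    order = sorted(range(len(y_true)), key=lambda i: y_score[i])
    size = len(order) // n_bins
    return [sum(y_true[i] for i in order[b * size:(b + 1) * size]) / size
            for b in range(n_bins)]

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

An AUC of 0.5 corresponds to the null model; a large spread in observed rates between the lowest and highest deciles indicates useful risk stratification even when the AUC is modest.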

Risk‐Standardized Readmission Rates

Using hierarchical logistic regression, we modeled the log‐odds of readmission within 30 days of discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics, and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation, or clustering, of observed outcomes, and models the assumption that underlying differences in quality among hospitals being evaluated lead to systematic differences in outcomes. We then calculated hospital‐specific readmission rates as the ratio of predicted‐to‐expected readmissions (similar to an observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization. The predicted number of readmissions in each hospital is estimated using its own patient mix and its estimated hospital‐specific intercept. The expected number of readmissions in each hospital is estimated using its patient mix and the average hospital‐specific intercept. To assess hospital performance in any given year, we re‐estimate model coefficients using that year's data.
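Once the hierarchical model supplies each patient's probability under the hospital's own intercept and under the average intercept, the standardization step is a simple ratio. A toy sketch (the numbers are illustrative, not measure results):

```python
def risk_standardized_rate(pred_probs, exp_probs, national_rate):
    """Indirect standardization: (predicted / expected) * national rate.

    pred_probs: per-patient probabilities using the hospital-specific intercept.
    exp_probs:  per-patient probabilities using the average intercept.
    """
    return sum(pred_probs) / sum(exp_probs) * national_rate

# A hospital predicted to generate 20% more readmissions than expected for its
# own patient mix lands 20% above the national unadjusted rate:
p_hosp = [0.20, 0.25, 0.15]   # hypothetical patients, hospital intercept
p_avg = [0.18, 0.20, 0.12]    # same patients, average intercept
print(round(risk_standardized_rate(p_hosp, p_avg, 0.174), 3))  # → 0.209
```

Because both sums are computed over the same patients, case-mix differences cancel and only the hospital-specific intercept moves the ratio away from 1.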

Model Validation: Administrative Claims

We compared the model performance in the development sample with its performance in the sample from the 2006 data that was not selected for the development set, and separately among pneumonia admissions in 2005. The model was recalibrated in each validation set.

Model Validation: Medical Record Abstraction

We developed a separate medical record‐based model of readmission risk using information from charts that had previously been abstracted as part of CMS's National Pneumonia Project. To select variables for this model, the clinician team: 1) reviewed the list of variables that were included in a medical record model that was previously developed for validating the National Quality Forum‐approved pneumonia mortality measure; 2) reviewed a list of other potential candidate variables available in the National Pneumonia Project dataset; and 3) reviewed variables that emerged as potentially important predictors of readmission, based on a systematic review of the literature that was conducted as part of measure development. This selection process resulted in a final medical record model that included 35 variables.

We linked patients in the National Pneumonia Project cohort to their Medicare claims data, including claims from one year before the index hospitalization, so that we could calculate risk‐standardized readmission rates in this cohort separately using medical record and claims‐based models. This analysis was conducted at the state level, for the 50 states plus the District of Columbia and Puerto Rico, because medical record data were unavailable in sufficient numbers to permit hospital‐level comparisons. To examine the relationship between risk‐standardized rates obtained from medical record and administrative data models, we estimated a linear regression model describing the association between the two rates, weighting each state by number of index hospitalizations, and calculated the correlation coefficient and the intercept and slope of this equation. A slope close to 1 and an intercept close to 0 would provide evidence that risk‐standardized state readmission rates from the medical record and claims models were similar. We also calculated the difference between state risk‐standardized readmission rates from the two models.
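The agreement analysis above is an admission-weighted least-squares fit of one set of state rates on the other; a stdlib sketch with illustrative data:

```python
def weighted_fit(x, y, w):
    """Weighted least squares for y = intercept + slope * x.

    x, y: state risk-standardized rates from the two models.
    w: weights (e.g., number of index hospitalizations per state).
    """
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    slope = sxy / sxx
    return ybar - slope * xbar, slope

# Perfect agreement between the two models yields intercept 0 and slope 1:
rates_claims = [0.15, 0.17, 0.19]   # hypothetical state rates, claims model
rates_chart = [0.15, 0.17, 0.19]    # hypothetical state rates, chart model
weights = [1200, 2500, 900]         # hypothetical index admissions per state
print(weighted_fit(rates_claims, rates_chart, weights))  # → (0.0, 1.0)
```

Weighting by admissions keeps small states from dominating the comparison, which matters when state sample sizes differ by orders of magnitude.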

Analyses were conducted with the use of SAS version 9.1.3 (SAS Institute Inc, Cary, NC). Models were fitted separately for the National Pneumonia Project and 2006 cohort. We estimated the hierarchical models using the GLIMMIX procedure in SAS. The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation and Performance

After exclusions were applied, the 2006 sample included 453,251 pneumonia hospitalizations (Figure 1). The development sample consisted of 226,545 hospitalizations at 4675 hospitals, with an overall unadjusted 30‐day readmission rate of 17.4%. In 11,694 index cases (5.2%), the patient died within 30 days without being readmitted. Median readmission rate was 16.3%, 25th and 75th percentile rates were 11.1% and 21.3%, and at the 10th and 90th percentile, hospital readmission rates ranged from 4.6% to 26.7% (Figure 2).

Figure 1
Pneumonia admissions included in measure calculation.
Figure 2
Distribution of unadjusted readmission rates.

The claims model included 39 variables (age, sex, and 37 clinical variables) (Table 1). The mean age of the cohort was 80.0 years, with 55.5% women and 11.1% nonwhite patients. The mean observed readmission rate in the development sample ranged from 9% in the lowest decile of predicted readmission risk to 32% in the highest decile, an absolute difference of 23 percentage points. The AUC was 0.63. For comparison, a model with only age and sex had an AUC of 0.51, and a model with all candidate variables also had an AUC of 0.63 (Table 2).

Readmission Model Performance of Administrative Claims Models

| Sample | Calibration (γ0, γ1)* | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals <−2 (%) | (−2, 0) (%) | (0, 2) (%) | 2+ (%) | Model χ2 (No. of Covariates) |
|---|---|---|---|---|---|---|---|---|
| Development sample: 2006 (1st half), N = 226,545 | (0, 1) | (0.09, 0.32) | 0.63 | 0 | 82.62 | 7.39 | 9.99 | 6,843 (40) |
| Validation sample: 2006 (2nd half), N = 226,706 | (0.002, 0.997) | (0.09, 0.31) | 0.63 | 0 | 82.55 | 7.45 | 9.99 | 6,870 (40) |
| Validation sample: 2005, N = 536,015 | (0.035, 1.008) | (0.08, 0.31) | 0.63 | 0 | 82.67 | 7.31 | 10.03 | 16,241 (40) |

  • NOTE: Over‐fitting indices (γ0, γ1) provide evidence of over‐fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities are p = 1/(1 + exp{−Xb}), and Z = Xb (eg, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample; eg, Logit(P(Y = 1|Z)) = γ0 + γ1Z. Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over‐fitting.

  • Abbreviations: AUC, area under the receiver operating characteristic curve.

  • Max‐rescaled R‐square.

  • Observed rates.

  • Wald chi‐square.
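The over-fitting (calibration) indices reported in the table come from refitting a two-parameter logistic model of the validation outcomes on the development model's linear predictor Z. A stdlib Newton-Raphson sketch (illustrative only, not the study's SAS code):

```python
import math

def calibration_indices(y, z, iterations=25):
    """Fit Logit(P(Y=1|Z)) = g0 + g1*Z by Newton-Raphson.

    y: 0/1 outcomes in the validation sample; z: the linear predictor X*b
    carried over from the development model. Estimates of g0 near 0 and g1
    near 1 suggest little over-fitting.
    """
    g0, g1 = 0.0, 1.0
    for _ in range(iterations):
        grad0 = grad1 = h00 = h01 = h11 = 0.0
        for yi, zi in zip(y, z):
            p = 1.0 / (1.0 + math.exp(-(g0 + g1 * zi)))
            grad0 += yi - p               # score for the intercept
            grad1 += (yi - p) * zi        # score for the slope
            w = p * (1.0 - p)             # observation weight for the Hessian
            h00 += w
            h01 += w * zi
            h11 += w * zi * zi
        det = h00 * h11 - h01 * h01
        g0 += (h11 * grad0 - h01 * grad1) / det   # Newton step, 2x2 inverse
        g1 += (h00 * grad1 - h01 * grad0) / det
    return g0, g1
```

When Z carries no information about the validation outcomes, the fitted slope collapses toward 0 rather than 1, which is exactly the signal the index is designed to detect.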

Hospital Risk‐Standardized Readmission Rates

Risk‐standardized readmission rates varied across hospitals (Figure 3). The median risk‐standardized readmission rate was 17.3%, and the 25th and 75th percentiles were 16.9% and 17.9%, respectively. The 5th percentile was 16.0% and the 95th percentile was 19.1%. The odds of readmission for a hospital one standard deviation above average were 1.4 times those of a hospital one standard deviation below average.

Figure 3
Distribution of risk‐standardized readmission rates.

Administrative Model Validation

In the remaining 50% of pneumonia index hospitalizations from 2006, and the entire 2005 cohort, regression coefficients and standard errors of model variables were similar to those in the development data set. Model performance using 2005 data was consistent with model performance using the 2006 development and validation half‐samples (Table 2).

Medical Record Validation

After exclusions, the medical record sample taken from the National Pneumonia Project included 47,429 cases, with an unadjusted 30‐day readmission rate of 17.0%. The final medical record risk‐adjustment model included a total of 35 variables, whose prevalence and association with readmission risk varied modestly (Table 3). Performance of the medical record and administrative models was similar (areas under the ROC curve 0.59 and 0.63, respectively) (Table 4). Additionally, in the administrative model, predicted readmission rates ranged from 8% in the lowest predicted decile to 30% in the highest predicted decile, while in the medical record model, the corresponding rates varied from 10% to 26%.

Regression Model Results from Medical Record Sample

| Variable | Percent | Estimate | Standard Error | Odds Ratio | 95% CI |
|---|---|---|---|---|---|
| Age − 65, mean (SD) | 15.24 (7.87) | −0.003 | 0.002 | 0.997 | 0.993–1.000 |
| Male | 46.18 | 0.122 | 0.025 | 1.130 | 1.075–1.188 |
| Nursing home resident | 17.71 | 0.035 | 0.037 | 1.036 | 0.963–1.114 |
| Neoplastic disease | 6.80 | 0.130 | 0.049 | 1.139 | 1.034–1.254 |
| Liver disease | 1.04 | −0.089 | 0.123 | 0.915 | 0.719–1.164 |
| History of heart failure | 28.98 | 0.234 | 0.029 | 1.264 | 1.194–1.339 |
| History of renal disease | 8.51 | 0.188 | 0.047 | 1.206 | 1.100–1.323 |
| Altered mental status | 17.95 | 0.009 | 0.034 | 1.009 | 0.944–1.080 |
| Pleural effusion | 21.20 | 0.165 | 0.030 | 1.179 | 1.111–1.251 |
| BUN ≥30 mg/dl | 23.28 | 0.160 | 0.033 | 1.174 | 1.100–1.252 |
| BUN missing | 14.56 | −0.101 | 0.185 | 0.904 | 0.630–1.298 |
| Systolic BP <90 mmHg | 2.95 | 0.068 | 0.070 | 1.070 | 0.932–1.228 |
| Systolic BP missing | 11.21 | 0.149 | 0.425 | 1.160 | 0.504–2.669 |
| Pulse ≥125/min | 7.73 | 0.036 | 0.047 | 1.036 | 0.945–1.137 |
| Pulse missing | 11.22 | 0.210 | 0.405 | 1.234 | 0.558–2.729 |
| Respiratory rate ≥30/min | 16.38 | 0.079 | 0.034 | 1.082 | 1.012–1.157 |
| Respiratory rate missing | 11.39 | 0.204 | 0.240 | 1.226 | 0.765–1.964 |
| Sodium <130 mmol/L | 4.82 | 0.136 | 0.057 | 1.145 | 1.025–1.280 |
| Sodium missing | 14.39 | 0.049 | 0.143 | 1.050 | 0.793–1.391 |
| Glucose ≥250 mg/dl | 5.19 | −0.005 | 0.057 | 0.995 | 0.889–1.114 |
| Glucose missing | 15.44 | −0.156 | 0.105 | 0.855 | 0.696–1.051 |
| Hematocrit <30% | 7.77 | 0.270 | 0.044 | 1.310 | 1.202–1.428 |
| Hematocrit missing | 13.62 | −0.071 | 0.135 | 0.932 | 0.715–1.215 |
| Creatinine ≥2.5 mg/dL | 4.68 | 0.109 | 0.062 | 1.115 | 0.989–1.258 |
| Creatinine missing | 14.63 | 0.200 | 0.167 | 1.221 | 0.880–1.695 |
| WBC 6–12 b/L | 38.04 | −0.021 | 0.049 | 0.979 | 0.889–1.079 |
| WBC >12 b/L | 41.45 | −0.068 | 0.049 | 0.934 | 0.848–1.029 |
| WBC missing | 12.85 | 0.167 | 0.162 | 1.181 | 0.860–1.623 |
| Immunosuppressive therapy | 15.01 | 0.347 | 0.035 | 1.415 | 1.321–1.516 |
| Chronic lung disease | 42.16 | 0.137 | 0.028 | 1.147 | 1.086–1.211 |
| Coronary artery disease | 39.57 | 0.150 | 0.028 | 1.162 | 1.100–1.227 |
| Diabetes mellitus | 20.90 | 0.137 | 0.033 | 1.147 | 1.076–1.223 |
| Alcohol/drug abuse | 3.40 | −0.099 | 0.071 | 0.906 | 0.788–1.041 |
| Dementia/Alzheimer's disease | 16.38 | 0.125 | 0.038 | 1.133 | 1.052–1.222 |
| Splenectomy | 0.44 | 0.016 | 0.186 | 1.016 | 0.706–1.463 |

  • NOTE: Between‐state variance = 0.024; standard error = 0.00.

  • Abbreviations: BP, blood pressure; BUN, blood urea nitrogen; CI, confidence interval; SD, standard deviation; WBC, white blood cell count.
Model Performance of Medical Record Model

| Model | Calibration (γ0, γ1)* | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals <−2 (%) | (−2, 0) (%) | (0, 2) (%) | 2+ (%) | Model χ2 (No. of Covariates) |
|---|---|---|---|---|---|---|---|---|
| Medical record model development sample (NP): N = 47,429; no. of 30‐day readmissions = 8,042 | (0, 1) | (0.10, 0.26) | 0.59 | 0 | 83.04 | 5.28 | 11.68 | 710 (35) |
| Linked administrative model validation sample: N = 47,429; no. of 30‐day readmissions = 8,042 | (0, 1) | (0.08, 0.30) | 0.63 | 0 | 83.04 | 6.94 | 10.01 | 1,414 (40) |

  • Abbreviations: AUC, area under the receiver operating characteristic curve.

  • Max‐rescaled R‐square.

  • Observed rates.

  • Wald chi‐square.

The correlation coefficient of the estimated state‐specific standardized readmission rates from the administrative and medical record models was 0.96, and the proportion of the variance explained by the model was 0.92 (Figure 4).

Figure 4
Comparison of state‐level risk‐standardized readmission rates from medical record and administrative models. Abbreviations: HGLM, hierarchical generalized linear models.

DISCUSSION

We have described the development, validation, and results of a hospital, 30‐day, risk‐standardized readmission model for pneumonia that was created to support current federal transparency initiatives. The model uses administrative claims data from Medicare fee‐for‐service patients and produces results that are comparable to a model based on information obtained through manual abstraction of medical records. We observed an overall 30‐day readmission rate of 17%, and our analyses revealed substantial variation across US hospitals, suggesting that improvement by lower performing institutions is an achievable goal.

Because more than one in six pneumonia patients is rehospitalized shortly after discharge, and because pneumonia hospitalizations represent an enormous expense to the Medicare program, prevention of readmissions is now widely recognized to offer a substantial opportunity to improve patient outcomes while simultaneously lowering health care costs. Accordingly, promotion of strategies to reduce readmission rates has become a key priority for payers and quality‐improvement organizations. These range from policy‐level attempts to stimulate change, such as publicly reporting hospital readmission rates on government websites, to establishing accreditation standards, such as the Joint Commission's requirement to accurately reconcile medications, to the creation of quality improvement collaboratives focused on sharing best practices across institutions. Regardless of the approach taken, a valid, risk‐adjusted measure of performance is required to evaluate and track performance over time. The measure we have described meets the National Quality Forum's measure evaluation criteria: it addresses an important clinical topic for which there appear to be significant opportunities for improvement; it is precisely defined and has been subjected to validity and reliability testing; it is risk‐adjusted based on patient clinical factors present at the start of care; it is feasible to produce; and it is understandable by a broad range of potential users.21 Because hospitalists are the physicians primarily responsible for the care of patients with pneumonia at US hospitals, and because they frequently serve as the physician champions for quality improvement activities related to pneumonia, it is especially important that they maintain a thorough understanding of the measures and methodologies underlying current efforts to measure hospital performance.

Several features of our approach warrant additional comment. First, we deliberately chose to measure all readmission events rather than attempt to discriminate between potentially preventable and nonpreventable readmissions. From the patient perspective, readmission for any reason is a concern, and limiting the measure to pneumonia‐related readmissions could make it susceptible to gaming by hospitals. Moreover, determining whether a readmission is related to a potential quality problem is not straightforward. For example, a patient with pneumonia whose discharge medications were prescribed incorrectly may be readmitted with a hip fracture following an episode of syncope. It would be inappropriate to treat this readmission as unrelated to the care the patient received for pneumonia. Additionally, while our approach does not presume that every readmission is preventable, the goal is to reduce the risk of readmissions generally (not just in narrowly defined subpopulations), and successful interventions to reduce rehospitalization have typically demonstrated reductions in all‐cause readmission.9, 22 Second, deaths that occurred within 30 days of discharge, yet that were not accompanied by a hospital readmission, were not counted as a readmission outcome. While it may seem inappropriate to treat a postdischarge death as a nonevent (rather than censoring or excluding such cases), alternative analytic approaches, such as using a hierarchical survival model, are not currently computationally feasible with large national data sets. Fortunately, only a relatively small proportion of discharges fell into this category (5.2% of index cases in the 2006 development sample died within 30 days of discharge without being readmitted). An alternative approach to handling the competing outcome of death would have been to use a composite outcome of readmission or death. 
However, we believe that it is important to report the outcomes separately because factors that predict readmission and mortality may differ, and when making comparisons across hospitals it would not be possible to determine whether differences in rate were due to readmission or mortality. Third, while the patient‐level readmission model showed only modest discrimination, we intentionally excluded covariates such as race and socioeconomic status, as well as in‐hospital events and potential complications of care, and whether patients were discharged home or to a skilled nursing facility. While these variables could have improved predictive ability, they may be directly or indirectly related to quality or supply factors that should not be included in a model that seeks to control for patient clinical characteristics. For example, if hospitals with a large share of poor patients have higher readmission rates, then including income in the model will obscure differences that are important to identify. While we believe that the decision to exclude such factors in the model is in the best interest of patients, and supports efforts to reduce health inequality in society more generally, we also recognize that hospitals that care for a disproportionate share of poor patients are likely to require additional resources to overcome these social factors. Fourth, we limited the analysis to patients with a principal diagnosis of pneumonia, and chose not to also include those with a principal diagnosis of sepsis or respiratory failure coupled with a secondary diagnosis of pneumonia. While the broader definition is used by CMS in the National Pneumonia Project, that initiative relied on chart abstraction to differentiate pneumonia present at the time of admission from cases developing as a complication of hospitalization. 
Additionally, we did not attempt to differentiate between community‐acquired and healthcare‐associated pneumonia, however our approach is consistent with the National Pneumonia Project and Pneumonia Patient Outcomes Research Team.18 Fifth, while our model estimates readmission rates at the hospital level, we recognize that readmissions are influenced by a complex and extensive range of factors. In this context, greater cooperation between hospitals and other care providers will almost certainly be required in order to achieve dramatic improvement in readmission rates, which in turn will depend upon changes to the way serious illness is paid for. Some options that have recently been described include imposing financial penalties for early readmission, extending the boundaries of case‐based payment beyond hospital discharge, and bundling payments between hospitals and physicians.2325

Our measure has several limitations. First, our models were developed and validated using Medicare data, and the results may not apply to pneumonia patients younger than 65 years of age. However, most patients hospitalized with pneumonia in the US are 65 or older. In addition, we were unable to test the model in a Medicare managed care population, because data are not currently available on such patients. Finally, the medical record-based validation was conducted at the state level because the sample size was insufficient to carry it out at the hospital level.

In conclusion, more than 17% of Medicare beneficiaries are readmitted within 30 days following discharge after a hospitalization for pneumonia, and rates vary substantially across institutions. The development of a valid measure of hospital performance and public reporting are important first steps toward focusing attention on this problem. Actual improvement will now depend on whether hospitals and partner organizations are successful in identifying and implementing effective methods to prevent readmission.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360(14):1418-1428.
  2. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. 2007.
  3. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital-based Care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed November 7, 2009.
  4. Centers for Medicare 353(3):255264.
  5. Baker DW, Einstadter D, Husak SS, Cebul RD. Trends in postdischarge mortality and readmissions: has length of stay declined too far? Arch Intern Med. 2004;164(5):538-544.
  6. Vecchiarino P, Bohannon RW, Ferullo J, Maljanian R. Short-term outcomes and their predictors for patients hospitalized with community-acquired pneumonia. Heart Lung. 2004;33(5):301-307.
  7. Dean NC, Bateman KA, Donnelly SM, et al. Improved clinical outcomes with utilization of a community-acquired pneumonia guideline. Chest. 2006;130(3):794-799.
  8. Gleason PP, Meehan TP, Fine JM, Galusha DH, Fine MJ. Associations between initial antimicrobial therapy and medical outcomes for hospitalized elderly patients with pneumonia. Arch Intern Med. 1999;159(21):2562-2572.
  9. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  10. Coleman EA, Parry C, Chalmers S, Min S. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822-1828.
  11. Corrigan JM, Eden J, Smith BM, eds. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Committee on Enhancing Federal Healthcare Quality Programs. Washington, DC: National Academies Press; 2003.
  12. Medicare.gov—Hospital Compare. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default1(1):2937.
  13. Krumholz HM, Normand ST, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75-85.
  14. NQF-Endorsed Standards. Available at: http://www.qualityforum.org/Measures_List.aspx. Accessed November 6, 2009.
  15. Houck PM, Bratzler DW, Nsa W, Ma A, Bartlett JG. Timing of antibiotic administration and outcomes for Medicare patients hospitalized with community-acquired pneumonia. Arch Intern Med. 2004;164(6):637-644.
  16. Pope G, Ellis R, Ash A. Diagnostic Cost Group Hierarchical Condition Category Models for Medicare Risk Adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc; 2000. Available at: http://www.cms.hhs.gov/Reports/Reports/ItemDetail.asp?ItemID=CMS023176. Accessed November 7, 2009.
  17. Harrell FE Jr. Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis. 1st ed. New York: Springer; 2006.
  18. National Quality Forum—Measure Evaluation Criteria. 2008. Available at: http://www.qualityforum.org/uploadedFiles/Quality_Forum/Measuring_Performance/Consensus_Development_Process%E2%80%99s_Principle/EvalCriteria2008–08‐28Final.pdf?n=4701.
  19. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow-up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281(7):613-620.
  20. Davis K. Paying for care episodes and care coordination. N Engl J Med. 2007;356(11):1166-1168.
  21. Luft HS. Health care reform—toward more freedom, and responsibility, for physicians. N Engl J Med. 2009;361(6):623-628.
  22. Rosenthal MB. Beyond pay for performance—emerging models of provider-payment reform. N Engl J Med. 2008;359(12):1197-1200.
Journal of Hospital Medicine. 6(3):142-150.

Hospital readmissions are emblematic of the numerous challenges facing the US health care system. Despite high levels of spending, nearly 20% of Medicare beneficiaries are readmitted within 30 days of hospital discharge, many readmissions are considered preventable, and rates vary widely by hospital and region.1 Further, while readmissions have been estimated to cost taxpayers as much as $17 billion annually, the current fee-for-service method of paying for the acute care needs of seniors rewards hospitals financially for readmissions rather than for their prevention.2

Pneumonia is the second most common reason for hospitalization among Medicare beneficiaries, accounting for approximately 650,000 admissions annually,3 and has been a focus of national quality-improvement efforts for more than a decade.4, 5 Despite improvements in key processes of care, rates of readmission within 30 days of discharge following a hospitalization for pneumonia have been reported to vary from 10% to 24%.6-8 Among several factors, readmissions are believed to be influenced by the quality of both inpatient and outpatient care, and by care-coordination activities occurring in the transition from inpatient to outpatient status.9-12

Public reporting of hospital performance is considered a key strategy for improving quality, reducing costs, and increasing the value of hospital care, both in the US and worldwide.13 In 2009, the Centers for Medicare & Medicaid Services (CMS) expanded its reporting initiatives by adding risk‐adjusted hospital readmission rates for acute myocardial infarction, heart failure, and pneumonia to the Hospital Compare website.14, 15 Readmission rates are an attractive focus for public reporting for several reasons. First, in contrast to most process‐based measures of quality (eg, whether a patient with pneumonia received a particular antibiotic), a readmission is an adverse outcome that matters to patients and families.16 Second, unlike process measures whose assessment requires detailed review of medical records, readmissions can be easily determined from standard hospital claims. Finally, readmissions are costly, and their prevention could yield substantial savings to society.

A necessary prerequisite for public reporting of readmission is a validated, risk‐adjusted measure that can be used to track performance over time and can facilitate comparisons across institutions. Toward this end, we describe the development, validation, and results of a National Quality Forum‐approved and CMS‐adopted model to estimate hospital‐specific, risk‐standardized, 30‐day readmission rates for Medicare patients hospitalized with pneumonia.17

METHODS

Data Sources

We used 2005-2006 claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files to develop and validate the administrative model. The Medicare Enrollment Database was used to determine Medicare fee-for-service enrollment and mortality statuses. A medical record model, used for additional validation of the administrative model, was developed using information abstracted from the charts of 75,616 pneumonia cases from 1998-2001 as part of the National Pneumonia Project, a CMS quality improvement initiative.18

Study Cohort

We identified hospitalizations of patients 65 years of age and older with a principal diagnosis of pneumonia (International Classification of Diseases, 9th Revision, Clinical Modification [ICD-9-CM] codes 480.XX, 481, 482.XX, 483.X, 485, 486, and 487.0) as potential index pneumonia admissions. Because our focus was readmission for patients discharged from acute care settings, we excluded admissions in which patients died or were transferred to another acute care facility. Additionally, we restricted the analysis to patients who had been enrolled in fee-for-service Medicare Parts A and B for at least 12 months prior to their pneumonia hospitalization, so that we could use diagnostic codes from all inpatient and outpatient encounters during that period to enhance identification of comorbidities.
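The inclusion and exclusion rules above can be made concrete in code. The following is an illustrative Python sketch, not the authors' implementation (which was run in SAS against Medicare Standard Analytic Files); the claim field names are hypothetical.

```python
# Illustrative sketch of the cohort rules; field names are hypothetical,
# not taken from the Medicare Standard Analytic Files.

# ICD-9-CM principal-diagnosis prefixes for pneumonia (480.XX, 481, 482.XX,
# 483.X, 485, 486, 487.0), with the decimal point removed as in claims data.
PNEUMONIA_DX_PREFIXES = ("480", "481", "482", "483", "485", "486", "4870")

def is_index_admission(claim: dict) -> bool:
    """Return True if a hospitalization qualifies as an index pneumonia admission."""
    dx = claim["principal_dx"].replace(".", "")
    return (
        dx.startswith(PNEUMONIA_DX_PREFIXES)
        and claim["age"] >= 65                          # patients 65 and older
        and not claim["died_in_hospital"]               # exclude in-hospital deaths
        and not claim["transferred_to_acute_care"]      # exclude acute-care transfers
        and claim["months_ffs_enrolled_prior"] >= 12    # 12 months of Parts A and B
    )
```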

Outcome

The outcome was 30‐day readmission, defined as occurrence of at least one hospitalization for any cause within 30 days of discharge after an index admission. Readmissions were identified from hospital claims data, and were attributed to the hospital that had discharged the patient. A 30‐day time frame was selected because it is a clinically meaningful period during which hospitals can be expected to collaborate with other organizations and providers to implement measures to reduce the risk of rehospitalization.
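Stated as code, the outcome definition reduces to a simple window check. A minimal Python sketch (our own illustration; in practice the dates come from hospital claims, and attribution to the discharging hospital is handled separately):

```python
from datetime import date

def readmitted_within_30_days(discharge_date: date, subsequent_admissions: list) -> bool:
    """All-cause 30-day readmission: at least one hospitalization for any cause
    within 30 days after the index discharge.  Admissions on the discharge day
    itself are assumed to have been resolved upstream as transfers."""
    return any(0 < (adm - discharge_date).days <= 30 for adm in subsequent_admissions)
```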

Candidate and Final Model Variables

Candidate variables for the administrative claims model were selected by a clinician team from 189 diagnostic groups included in the Hierarchical Condition Category (HCC) clinical classification system.19 The HCC clinical classification system was developed for CMS in preparation for all-encounter risk adjustment for Medicare Advantage (managed care). Under the HCC algorithm, the more than 15,000 ICD-9-CM diagnosis codes are assigned to one of 189 clinically coherent condition categories (CCs). We used the April 2008 version of the ICD-9-CM to CC assignment map, which is maintained by CMS and posted at http://www.qualitynet.org. A total of 154 CCs were considered potentially relevant to the readmission outcome and were included for further consideration. Some CCs were further combined into clinically coherent groupings of CCs. Our set of candidate variables ultimately included 97 CC-based variables, two demographic variables (age and sex), and two procedure codes potentially relevant to readmission risk (history of percutaneous coronary intervention [PCI] and history of coronary artery bypass graft [CABG]).

The final risk-adjustment model included 39 variables selected by the team of clinicians and analysts, primarily on the basis of their clinical relevance but also with knowledge of the strength of their statistical association with the readmission outcome (Table 1). For each patient, the presence or absence of these conditions was assessed from multiple sources, including secondary diagnoses during the index admission, principal and secondary diagnoses from hospital admissions in the 12 months prior to the index admission, and diagnoses from hospital outpatient and physician encounters in the 12 months before the index admission. A small number of CCs were considered to represent potential complications of care (eg, bleeding). Because we did not want to adjust for complications of care occurring during the index admission, a patient was not considered to have one of these conditions unless it was also present in at least one encounter prior to the index admission.
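The look-back rule for a single condition can be sketched as follows. This is our own illustration of the logic described above, with hypothetical inputs (sets of CC codes observed at the index admission and in the prior 12 months):

```python
def condition_present(cc: str,
                      index_secondary_ccs: set,
                      prior_year_ccs: set,
                      potential_complications: set) -> bool:
    """A condition counts toward risk adjustment if coded during the index
    admission or in any encounter in the prior 12 months -- except that a
    potential complication of care (eg, bleeding) is counted only if it also
    appears in at least one encounter *before* the index admission."""
    if cc in potential_complications:
        return cc in prior_year_ccs
    return cc in index_secondary_ccs or cc in prior_year_ccs
```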

Table 1. Regression Model Variables and Results in Derivation Sample

Variable | Frequency (%) | Estimate | Standard Error | Odds Ratio | 95% CI
Intercept | - | -2.395 | 0.021 | - | -
Age - 65 (years above 65, continuous) | - | 0.0001 | 0.001 | 1.000 | 0.998-1.001
Male | 45 | 0.071 | 0.012 | 1.073 | 1.048-1.099
History of CABG | 5.2 | -0.179 | 0.027 | 0.836 | 0.793-0.881
Metastatic cancer and acute leukemia (CC 7) | 4.3 | 0.177 | 0.029 | 1.194 | 1.128-1.263
Lung, upper digestive tract, and other severe cancers (CC 8) | 6.0 | 0.256 | 0.024 | 1.292 | 1.232-1.354
Diabetes and DM complications (CC 15-20, 119, 120) | 36 | 0.059 | 0.012 | 1.061 | 1.036-1.087
Disorders of fluid/electrolyte/acid-base (CC 22, 23) | 34 | 0.149 | 0.013 | 1.160 | 1.131-1.191
Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 46 | 0.118 | 0.012 | 1.126 | 1.099-1.153
Other psychiatric disorders (CC 60) | 12 | 0.108 | 0.017 | 1.114 | 1.077-1.151
Cardio-respiratory failure and shock (CC 79) | 16 | 0.114 | 0.016 | 1.121 | 1.087-1.156
Congestive heart failure (CC 80) | 39 | 0.151 | 0.014 | 1.163 | 1.133-1.194
Chronic atherosclerosis (CC 83, 84) | 47 | 0.051 | 0.013 | 1.053 | 1.027-1.079
Valvular and rheumatic heart disease (CC 86) | 23 | 0.062 | 0.014 | 1.064 | 1.036-1.093
Arrhythmias (CC 92, 93) | 38 | 0.126 | 0.013 | 1.134 | 1.107-1.163
Vascular or circulatory disease (CC 104-106) | 38 | 0.088 | 0.012 | 1.092 | 1.066-1.119
COPD (CC 108) | 58 | 0.186 | 0.013 | 1.205 | 1.175-1.235
Fibrosis of lung and other chronic lung disorders (CC 109) | 17 | 0.086 | 0.015 | 1.090 | 1.059-1.122
Renal failure (CC 131) | 17 | 0.147 | 0.016 | 1.158 | 1.122-1.196
Protein-calorie malnutrition (CC 21) | 7.9 | 0.121 | 0.020 | 1.129 | 1.086-1.173
History of infection (CC 1, 3-6) | 35 | 0.068 | 0.012 | 1.071 | 1.045-1.097
Severe hematological disorders (CC 44) | 3.6 | 0.117 | 0.028 | 1.125 | 1.064-1.188
Decubitus ulcer or chronic skin ulcer (CC 148, 149) | 10 | 0.101 | 0.018 | 1.106 | 1.067-1.146
History of pneumonia (CC 111-113) | 44 | 0.065 | 0.013 | 1.067 | 1.041-1.094
Vertebral fractures (CC 157) | 5.1 | 0.113 | 0.024 | 1.120 | 1.068-1.174
Other injuries (CC 162) | 32 | 0.061 | 0.012 | 1.063 | 1.038-1.089
Urinary tract infection (CC 135) | 26 | 0.064 | 0.014 | 1.066 | 1.038-1.095
Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal, and other cancers and tumors (CC 9-10) | 16 | 0.050 | 0.016 | 1.051 | 1.018-1.084
End-stage renal disease or dialysis (CC 129, 130) | 1.9 | 0.131 | 0.037 | 1.140 | 1.060-1.226
Drug/alcohol abuse/dependence/psychosis (CC 51-53) | 12 | 0.081 | 0.017 | 1.084 | 1.048-1.121
Septicemia/shock (CC 2) | 6.3 | 0.094 | 0.022 | 1.098 | 1.052-1.146
Other gastrointestinal disorders (CC 36) | 56 | 0.073 | 0.012 | 1.076 | 1.051-1.102
Acute coronary syndrome (CC 81, 82) | 8.3 | 0.126 | 0.019 | 1.134 | 1.092-1.178
Pleural effusion/pneumothorax (CC 114) | 12 | 0.083 | 0.017 | 1.086 | 1.051-1.123
Other urinary tract disorders (CC 136) | 24 | 0.059 | 0.014 | 1.061 | 1.033-1.090
Stroke (CC 95, 96) | 10 | 0.047 | 0.019 | 1.049 | 1.011-1.088
Dementia and senility (CC 49, 50) | 27 | 0.031 | 0.014 | 1.031 | 1.004-1.059
Hemiplegia, paraplegia, paralysis, functional disability (CC 67-69, 100-102, 177, 178) | 7.4 | 0.068 | 0.021 | 1.070 | 1.026-1.116
Other lung disorders (CC 115) | 45 | 0.005 | 0.012 | 1.005 | 0.982-1.030
Major psychiatric disorders (CC 54-56) | 11 | 0.038 | 0.018 | 1.038 | 1.003-1.075
Asthma (CC 110) | 12 | 0.006 | 0.018 | 1.006 | 0.972-1.041

Abbreviations: CABG, coronary artery bypass graft; CC, condition category; CI, confidence interval; COPD, chronic obstructive pulmonary disease; DM, diabetes mellitus.

Model Derivation

For the development of the administrative claims model, we randomly sampled half of the 2006 hospitalizations that met inclusion criteria. To assess model performance at the patient level, we calculated the area under the receiver operating characteristic (ROC) curve (AUC), and calculated observed readmission rates in the lowest and highest deciles of predicted readmission probability. We also compared performance with a null model, a model that adjusted for age and sex, and a model that included all candidate variables.20
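Both patient-level checks reduce to a few lines of code. A self-contained Python sketch (our own illustration, using the rank-based Mann-Whitney identity for the AUC rather than the authors' SAS routines):

```python
def auc(y, p):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen readmitted patient has a higher predicted risk than a randomly
    chosen non-readmitted patient (ties count one half)."""
    pos = [pi for yi, pi in zip(y, p) if yi == 1]
    neg = [pi for yi, pi in zip(y, p) if yi == 0]
    wins = sum(1.0 if pi > ni else 0.5 if pi == ni else 0.0
               for pi in pos for ni in neg)
    return wins / (len(pos) * len(neg))

def extreme_decile_rates(y, p):
    """Observed event rates in the lowest and highest deciles of predicted risk."""
    order = sorted(range(len(y)), key=lambda i: p[i])
    k = max(1, len(y) // 10)
    low, high = order[:k], order[-k:]
    return sum(y[i] for i in low) / k, sum(y[i] for i in high) / k
```

The quadratic-time AUC loop is for clarity only; a production version would use a sort-based rank sum.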

Risk‐Standardized Readmission Rates

Using hierarchical logistic regression, we modeled the log-odds of readmission within 30 days of discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital-specific intercept. This strategy accounts for within-hospital correlation, or clustering, of observed outcomes, and models the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. We then calculated hospital-specific readmission rates as the ratio of predicted-to-expected readmissions (similar to an observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization. The predicted number of readmissions in each hospital is estimated from its patient mix and its estimated hospital-specific intercept. The expected number of readmissions in each hospital is estimated from the same patient mix and the average hospital-specific intercept. To assess hospital performance in any given year, we re-estimate model coefficients using that year's data.
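The standardization step itself is a small calculation once the model is fitted. A Python sketch of the predicted/expected ratio described above (our illustration; `xb` stands for a patient's linear predictor from the patient-level covariates, and the intercepts come from the hierarchical model):

```python
import math

def risk_standardized_rate(patient_xbs, hospital_intercept, average_intercept,
                           national_rate):
    """Indirect standardization: (predicted / expected) x national unadjusted
    rate.  'Predicted' uses the hospital's own random intercept; 'expected'
    uses the average intercept with the same patient mix."""
    inv_logit = lambda z: 1.0 / (1.0 + math.exp(-z))
    predicted = sum(inv_logit(xb + hospital_intercept) for xb in patient_xbs)
    expected = sum(inv_logit(xb + average_intercept) for xb in patient_xbs)
    return predicted / expected * national_rate
```

By construction, a hospital whose intercept equals the average receives exactly the national unadjusted rate.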

Model Validation: Administrative Claims

We compared the model performance in the development sample with its performance in the sample from the 2006 data that was not selected for the development set, and separately among pneumonia admissions in 2005. The model was recalibrated in each validation set.

Model Validation: Medical Record Abstraction

We developed a separate medical record‐based model of readmission risk using information from charts that had previously been abstracted as part of CMS's National Pneumonia Project. To select variables for this model, the clinician team: 1) reviewed the list of variables that were included in a medical record model that was previously developed for validating the National Quality Forum‐approved pneumonia mortality measure; 2) reviewed a list of other potential candidate variables available in the National Pneumonia Project dataset; and 3) reviewed variables that emerged as potentially important predictors of readmission, based on a systematic review of the literature that was conducted as part of measure development. This selection process resulted in a final medical record model that included 35 variables.

We linked patients in the National Pneumonia Project cohort to their Medicare claims data, including claims from one year before the index hospitalization, so that we could calculate risk‐standardized readmission rates in this cohort separately using medical record and claims‐based models. This analysis was conducted at the state level, for the 50 states plus the District of Columbia and Puerto Rico, because medical record data were unavailable in sufficient numbers to permit hospital‐level comparisons. To examine the relationship between risk‐standardized rates obtained from medical record and administrative data models, we estimated a linear regression model describing the association between the two rates, weighting each state by number of index hospitalizations, and calculated the correlation coefficient and the intercept and slope of this equation. A slope close to 1 and an intercept close to 0 would provide evidence that risk‐standardized state readmission rates from the medical record and claims models were similar. We also calculated the difference between state risk‐standardized readmission rates from the two models.
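The state-level comparison is an ordinary weighted least-squares fit of one set of rates on the other. A minimal Python sketch (our illustration; inputs are the two state rate vectors and the index-hospitalization counts used as weights):

```python
def weighted_line_fit(x, y, w):
    """Weighted least squares for y = intercept + slope * x, weighting each
    state by its number of index hospitalizations.  A slope near 1 and an
    intercept near 0 indicate the two models give similar state rates."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw          # weighted mean of x
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw          # weighted mean of y
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return my - slope * mx, slope
```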

Analyses were conducted with the use of SAS version 9.1.3 (SAS Institute Inc, Cary, NC). Models were fitted separately for the National Pneumonia Project and 2006 cohort. We estimated the hierarchical models using the GLIMMIX procedure in SAS. The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation and Performance

After exclusions were applied, the 2006 sample included 453,251 pneumonia hospitalizations (Figure 1). The development sample consisted of 226,545 hospitalizations at 4675 hospitals, with an overall unadjusted 30‐day readmission rate of 17.4%. In 11,694 index cases (5.2%), the patient died within 30 days without being readmitted. Median readmission rate was 16.3%, 25th and 75th percentile rates were 11.1% and 21.3%, and at the 10th and 90th percentile, hospital readmission rates ranged from 4.6% to 26.7% (Figure 2).

Figure 1. Pneumonia admissions included in measure calculation.
Figure 2. Distribution of unadjusted readmission rates.

The claims model included 39 variables (age, sex, and 37 clinical variables) (Table 1). The mean age of the cohort was 80.0 years, with 55.5% women and 11.1% nonwhite patients. The mean observed readmission rate in the development sample ranged from 9% in the lowest decile of predicted readmission risk to 32% in the highest decile, an absolute difference of 23 percentage points. The AUC was 0.63. For comparison, a model with only age and sex had an AUC of 0.51, and a model with all candidate variables had an AUC of 0.63 (Table 2).

Table 2. Readmission Model Performance of Administrative Claims Models

Sample | Calibration (γ0, γ1) | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals (%): (<-2) | (-2, 0) | (0, 2) | (2+) | Model χ2 (No. of Covariates)
Development sample: 2006 (1st half), N = 226,545 | (0, 1) | (0.09, 0.32) | 0.63 | 0 | 82.62 | 7.39 | 9.99 | 6,843 (40)
Validation sample: 2006 (2nd half), N = 226,706 | (0.002, 0.997) | (0.09, 0.31) | 0.63 | 0 | 82.55 | 7.45 | 9.99 | 6,870 (40)
Validation sample: 2005, N = 536,015 | (0.035, 1.008) | (0.08, 0.31) | 0.63 | 0 | 82.67 | 7.31 | 10.03 | 16,241 (40)

NOTE: Over-fitting indices (γ0, γ1) provide evidence of over-fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities p = 1/(1 + exp{-Xb}), and Z = Xb (ie, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope on Z, ie, Logit(P(Y = 1|Z)) = γ0 + γ1Z, is fitted in the validation sample. Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over-fitting.
Abbreviations: AUC, area under the receiver operating characteristic curve. Predictive ability is reported as observed rates. The model chi-square is the Wald chi-square. R-square is the max-rescaled R-square.
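The over-fitting (calibration) indices described in the table note can be reproduced by refitting a two-parameter logistic model of the outcome on the linear predictor Z in the validation sample. A self-contained Newton-Raphson sketch in Python (our own illustration of this standard procedure, not the authors' SAS code):

```python
import math

def calibration_indices(z, y, iterations=25):
    """Fit logit P(Y=1 | Z) = g0 + g1*Z by Newton-Raphson and return (g0, g1).
    Estimates of g0 far from 0 or g1 far from 1 suggest over-fitting."""
    g0, g1 = 0.0, 0.0
    for _ in range(iterations):
        p = [1.0 / (1.0 + math.exp(-(g0 + g1 * zi))) for zi in z]
        # Score vector (gradient of the log-likelihood).
        u0 = sum(yi - pi for yi, pi in zip(y, p))
        u1 = sum(zi * (yi - pi) for zi, yi, pi in zip(z, y, p))
        # Observed information matrix (2 x 2), inverted in closed form.
        w = [pi * (1.0 - pi) for pi in p]
        h00 = sum(w)
        h01 = sum(wi * zi for wi, zi in zip(w, z))
        h11 = sum(wi * zi * zi for wi, zi in zip(w, z))
        det = h00 * h11 - h01 * h01
        g0 += (h11 * u0 - h01 * u1) / det
        g1 += (h00 * u1 - h01 * u0) / det
    return g0, g1
```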

Hospital Risk‐Standardized Readmission Rates

Risk‐standardized readmission rates varied across hospitals (Figure 3). Median risk‐standardized readmission rate was 17.3%, and the 25th and 75th percentiles were 16.9% and 17.9%, respectively. The 5th percentile was 16.0% and the 95th percentile was 19.1%. Odds of readmission for a hospital one standard deviation above average was 1.4 times that of a hospital one standard deviation below average.

Figure 3. Distribution of risk-standardized readmission rates.

Administrative Model Validation

In the remaining 50% of pneumonia index hospitalizations from 2006, and the entire 2005 cohort, regression coefficients and standard errors of model variables were similar to those in the development data set. Model performance using 2005 data was consistent with model performance using the 2006 development and validation half‐samples (Table 2).

Medical Record Validation

After exclusions, the medical record sample taken from the National Pneumonia Project included 47,429 cases, with an unadjusted 30‐day readmission rate of 17.0%. The final medical record risk‐adjustment model included a total of 35 variables, whose prevalence and association with readmission risk varied modestly (Table 3). Performance of the medical record and administrative models was similar (areas under the ROC curve 0.59 and 0.63, respectively) (Table 4). Additionally, in the administrative model, predicted readmission rates ranged from 8% in the lowest predicted decile to 30% in the highest predicted decile, while in the medical record model, the corresponding rates varied from 10% to 26%.

Table 3. Regression Model Results from Medical Record Sample

Variable | Percent | Estimate | Standard Error | Odds Ratio | 95% CI
Age - 65, mean (SD) | 15.24 (7.87) | -0.003 | 0.002 | 0.997 | 0.993-1.000
Male | 46.18 | 0.122 | 0.025 | 1.130 | 1.075-1.188
Nursing home resident | 17.71 | 0.035 | 0.037 | 1.036 | 0.963-1.114
Neoplastic disease | 6.80 | 0.130 | 0.049 | 1.139 | 1.034-1.254
Liver disease | 1.04 | -0.089 | 0.123 | 0.915 | 0.719-1.164
History of heart failure | 28.98 | 0.234 | 0.029 | 1.264 | 1.194-1.339
History of renal disease | 8.51 | 0.188 | 0.047 | 1.206 | 1.100-1.323
Altered mental status | 17.95 | 0.009 | 0.034 | 1.009 | 0.944-1.080
Pleural effusion | 21.20 | 0.165 | 0.030 | 1.179 | 1.111-1.251
BUN ≥30 mg/dl | 23.28 | 0.160 | 0.033 | 1.174 | 1.100-1.252
BUN missing | 14.56 | -0.101 | 0.185 | 0.904 | 0.630-1.298
Systolic BP <90 mmHg | 2.95 | 0.068 | 0.070 | 1.070 | 0.932-1.228
Systolic BP missing | 11.21 | 0.149 | 0.425 | 1.160 | 0.504-2.669
Pulse ≥125/min | 7.73 | 0.036 | 0.047 | 1.036 | 0.945-1.137
Pulse missing | 11.22 | 0.210 | 0.405 | 1.234 | 0.558-2.729
Respiratory rate ≥30/min | 16.38 | 0.079 | 0.034 | 1.082 | 1.012-1.157
Respiratory rate missing | 11.39 | 0.204 | 0.240 | 1.226 | 0.765-1.964
Sodium <130 mmol/L | 4.82 | 0.136 | 0.057 | 1.145 | 1.025-1.280
Sodium missing | 14.39 | 0.049 | 0.143 | 1.050 | 0.793-1.391
Glucose ≥250 mg/dl | 5.19 | -0.005 | 0.057 | 0.995 | 0.889-1.114
Glucose missing | 15.44 | -0.156 | 0.105 | 0.855 | 0.696-1.051
Hematocrit <30% | 7.77 | 0.270 | 0.044 | 1.310 | 1.202-1.428
Hematocrit missing | 13.62 | -0.071 | 0.135 | 0.932 | 0.715-1.215
Creatinine ≥2.5 mg/dL | 4.68 | 0.109 | 0.062 | 1.115 | 0.989-1.258
Creatinine missing | 14.63 | 0.200 | 0.167 | 1.221 | 0.880-1.695
WBC 6-12 b/L | 38.04 | -0.021 | 0.049 | 0.979 | 0.889-1.079
WBC >12 b/L | 41.45 | -0.068 | 0.049 | 0.934 | 0.848-1.029
WBC missing | 12.85 | 0.167 | 0.162 | 1.181 | 0.860-1.623
Immunosuppressive therapy | 15.01 | 0.347 | 0.035 | 1.415 | 1.321-1.516
Chronic lung disease | 42.16 | 0.137 | 0.028 | 1.147 | 1.086-1.211
Coronary artery disease | 39.57 | 0.150 | 0.028 | 1.162 | 1.100-1.227
Diabetes mellitus | 20.90 | 0.137 | 0.033 | 1.147 | 1.076-1.223
Alcohol/drug abuse | 3.40 | -0.099 | 0.071 | 0.906 | 0.788-1.041
Dementia/Alzheimer's disease | 16.38 | 0.125 | 0.038 | 1.133 | 1.052-1.222
Splenectomy | 0.44 | 0.016 | 0.186 | 1.016 | 0.706-1.463

NOTE: Between-state variance = 0.024; standard error = 0.00.
Abbreviations: BP, blood pressure; BUN, blood urea nitrogen; CI, confidence interval; SD, standard deviation; WBC, white blood cell count.
Table 4. Model Performance of Medical Record Model

Sample | Calibration (γ0, γ1) | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals (%): (<-2) | (-2, 0) | (0, 2) | (2+) | Model χ2 (No. of Covariates)
Medical record model development sample (National Pneumonia Project): N = 47,429, No. of 30-day readmissions = 8,042 | (1, 0) | (0.10, 0.26) | 0.59 | 0 | 83.04 | 5.28 | 11.68 | 710 (35)
Linked administrative model validation sample: N = 47,429, No. of 30-day readmissions = 8,042 | (1, 0) | (0.08, 0.30) | 0.63 | 0 | 83.04 | 6.94 | 10.01 | 1,414 (40)

Abbreviations: AUC, area under the receiver operating characteristic curve. Predictive ability is reported as observed rates. The model chi-square is the Wald chi-square. R-square is the max-rescaled R-square.

The correlation coefficient of the estimated state‐specific standardized readmission rates from the administrative and medical record models was 0.96, and the proportion of the variance explained by the model was 0.92 (Figure 4).

Figure 4. Comparison of state-level risk-standardized readmission rates from medical record and administrative models. Abbreviations: HGLM, hierarchical generalized linear models.

DISCUSSION

We have described the development, validation, and results of a hospital, 30‐day, risk‐standardized readmission model for pneumonia that was created to support current federal transparency initiatives. The model uses administrative claims data from Medicare fee‐for‐service patients and produces results that are comparable to a model based on information obtained through manual abstraction of medical records. We observed an overall 30‐day readmission rate of 17%, and our analyses revealed substantial variation across US hospitals, suggesting that improvement by lower performing institutions is an achievable goal.

Because more than one in six pneumonia patients are rehospitalized shortly after discharge, and because pneumonia hospitalizations represent an enormous expense to the Medicare program, prevention of readmissions is now widely recognized to offer a substantial opportunity to improve patient outcomes while simultaneously lowering health care costs. Accordingly, promotion of strategies to reduce readmission rates has become a key priority for payers and quality-improvement organizations. These range from policy-level attempts to stimulate change, such as publicly reporting hospital readmission rates on government websites, to accreditation standards, such as the Joint Commission's requirement to accurately reconcile medications, to quality-improvement collaboratives focused on sharing best practices across institutions. Regardless of the approach taken, a valid, risk-adjusted measure of performance is required to evaluate and track performance over time. The measure we have described meets the National Quality Forum's measure evaluation criteria: it addresses an important clinical topic for which there appear to be significant opportunities for improvement; it is precisely defined and has been subjected to validity and reliability testing; it is risk-adjusted on the basis of patient clinical factors present at the start of care; it is feasible to produce; and it is understandable by a broad range of potential users.21 Because hospitalists are the physicians primarily responsible for the care of patients with pneumonia at US hospitals, and because they frequently serve as the physician champions for quality-improvement activities related to pneumonia, it is especially important that they maintain a thorough understanding of the measures and methodologies underlying current efforts to measure hospital performance.

Several features of our approach warrant additional comment. First, we deliberately chose to measure all readmission events rather than attempt to discriminate between potentially preventable and nonpreventable readmissions. From the patient perspective, readmission for any reason is a concern, and limiting the measure to pneumonia‐related readmissions could make it susceptible to gaming by hospitals. Moreover, determining whether a readmission is related to a potential quality problem is not straightforward. For example, a patient with pneumonia whose discharge medications were prescribed incorrectly may be readmitted with a hip fracture following an episode of syncope. It would be inappropriate to treat this readmission as unrelated to the care the patient received for pneumonia. Additionally, while our approach does not presume that every readmission is preventable, the goal is to reduce the risk of readmissions generally (not just in narrowly defined subpopulations), and successful interventions to reduce rehospitalization have typically demonstrated reductions in all‐cause readmission.9, 22 Second, deaths that occurred within 30 days of discharge, yet that were not accompanied by a hospital readmission, were not counted as a readmission outcome. While it may seem inappropriate to treat a postdischarge death as a nonevent (rather than censoring or excluding such cases), alternative analytic approaches, such as using a hierarchical survival model, are not currently computationally feasible with large national data sets. Fortunately, only a relatively small proportion of discharges fell into this category (5.2% of index cases in the 2006 development sample died within 30 days of discharge without being readmitted). An alternative approach to handling the competing outcome of death would have been to use a composite outcome of readmission or death. 
However, we believe that it is important to report the outcomes separately because factors that predict readmission and mortality may differ, and when making comparisons across hospitals it would not be possible to determine whether differences in rate were due to readmission or mortality. Third, while the patient‐level readmission model showed only modest discrimination, we intentionally excluded covariates such as race and socioeconomic status, as well as in‐hospital events and potential complications of care, and whether patients were discharged home or to a skilled nursing facility. While these variables could have improved predictive ability, they may be directly or indirectly related to quality or supply factors that should not be included in a model that seeks to control for patient clinical characteristics. For example, if hospitals with a large share of poor patients have higher readmission rates, then including income in the model will obscure differences that are important to identify. While we believe that the decision to exclude such factors in the model is in the best interest of patients, and supports efforts to reduce health inequality in society more generally, we also recognize that hospitals that care for a disproportionate share of poor patients are likely to require additional resources to overcome these social factors. Fourth, we limited the analysis to patients with a principal diagnosis of pneumonia, and chose not to also include those with a principal diagnosis of sepsis or respiratory failure coupled with a secondary diagnosis of pneumonia. While the broader definition is used by CMS in the National Pneumonia Project, that initiative relied on chart abstraction to differentiate pneumonia present at the time of admission from cases developing as a complication of hospitalization. 
Additionally, we did not attempt to differentiate between community‐acquired and healthcare‐associated pneumonia, however our approach is consistent with the National Pneumonia Project and Pneumonia Patient Outcomes Research Team.18 Fifth, while our model estimates readmission rates at the hospital level, we recognize that readmissions are influenced by a complex and extensive range of factors. In this context, greater cooperation between hospitals and other care providers will almost certainly be required in order to achieve dramatic improvement in readmission rates, which in turn will depend upon changes to the way serious illness is paid for. Some options that have recently been described include imposing financial penalties for early readmission, extending the boundaries of case‐based payment beyond hospital discharge, and bundling payments between hospitals and physicians.2325

Our measure has several limitations. First, our models were developed and validated using Medicare data, and the results may not apply to pneumonia patients less than 65 years of age. However, most patients hospitalized with pneumonia in the US are 65 or older. In addition, we were unable to test the model with a Medicare managed care population, because data are not currently available on such patients. Finally, the medical record‐based validation was conducted by state‐level analysis because the sample size was insufficient to carry this out at the hospital level.

In conclusion, more than 17% of Medicare beneficiaries are readmitted within 30 days following discharge after a hospitalization for pneumonia, and rates vary substantially across institutions. The development of a valid measure of hospital performance and public reporting are important first steps towards focusing attention on this problem. Actual improvement will now depend on whether hospitals and partner organizations are successful at identifying and implementing effective methods to prevent readmission.

Hospital readmissions are emblematic of the numerous challenges facing the US health care system. Despite high levels of spending, nearly 20% of Medicare beneficiaries are readmitted within 30 days of hospital discharge, many readmissions are considered preventable, and rates vary widely by hospital and region.1 Further, while readmissions have been estimated to cost taxpayers as much as $17 billion annually, the current fee‐for‐service method of paying for the acute care needs of seniors rewards hospitals financially for readmissions, not for preventing them.2

Pneumonia is the second most common reason for hospitalization among Medicare beneficiaries, accounting for approximately 650,000 admissions annually,3 and has been a focus of national quality‐improvement efforts for more than a decade.4, 5 Despite improvements in key processes of care, rates of readmission within 30 days of discharge following a hospitalization for pneumonia have been reported to vary from 10% to 24%.6–8 Among several factors, readmissions are believed to be influenced by the quality of both inpatient and outpatient care, and by care‐coordination activities occurring in the transition from inpatient to outpatient status.9–12

Public reporting of hospital performance is considered a key strategy for improving quality, reducing costs, and increasing the value of hospital care, both in the US and worldwide.13 In 2009, the Centers for Medicare & Medicaid Services (CMS) expanded its reporting initiatives by adding risk‐adjusted hospital readmission rates for acute myocardial infarction, heart failure, and pneumonia to the Hospital Compare website.14, 15 Readmission rates are an attractive focus for public reporting for several reasons. First, in contrast to most process‐based measures of quality (eg, whether a patient with pneumonia received a particular antibiotic), a readmission is an adverse outcome that matters to patients and families.16 Second, unlike process measures whose assessment requires detailed review of medical records, readmissions can be easily determined from standard hospital claims. Finally, readmissions are costly, and their prevention could yield substantial savings to society.

A necessary prerequisite for public reporting of readmission is a validated, risk‐adjusted measure that can be used to track performance over time and can facilitate comparisons across institutions. Toward this end, we describe the development, validation, and results of a National Quality Forum‐approved and CMS‐adopted model to estimate hospital‐specific, risk‐standardized, 30‐day readmission rates for Medicare patients hospitalized with pneumonia.17

METHODS

Data Sources

We used 2005–2006 claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files to develop and validate the administrative model. The Medicare Enrollment Database was used to determine Medicare fee‐for‐service enrollment and mortality status. A medical record model, used for additional validation of the administrative model, was developed using information abstracted from the charts of 75,616 pneumonia cases from 1998–2001 as part of the National Pneumonia Project, a CMS quality improvement initiative.18

Study Cohort

We identified hospitalizations of patients 65 years of age and older with a principal diagnosis of pneumonia (International Classification of Diseases, 9th Revision, Clinical Modification codes 480.XX, 481, 482.XX, 483.X, 485, 486, 487.0) as potential index pneumonia admissions. Because our focus was readmission for patients discharged from acute care settings, we excluded admissions in which patients died or were transferred to another acute care facility. Additionally, we restricted analysis to patients who had been enrolled in fee‐for‐service Medicare Parts A and B for at least 12 months prior to their pneumonia hospitalization, so that we could use diagnostic codes from all inpatient and outpatient encounters during that period to enhance identification of comorbidities.
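As a rough sketch of how these cohort rules translate into code, the fragment below filters simplified claims records. The field names, prefix matching, and sample records are hypothetical illustrations, not the CMS production logic.

```python
# Illustrative cohort selection following the rules described above.
# ICD-9-CM pneumonia codes, written without decimal points as they
# typically appear in claims; 487.0 becomes "4870".
PNEUMONIA_CODE_PREFIXES = ("480", "481", "482", "483", "485", "486", "4870")

def is_index_admission(claim):
    """Return True if a hospitalization qualifies as an index pneumonia
    admission: age >= 65, pneumonia principal diagnosis, discharged alive,
    not transferred, and 12 months of prior fee-for-service enrollment."""
    return (
        claim["age"] >= 65
        and claim["principal_dx"].startswith(PNEUMONIA_CODE_PREFIXES)
        and not claim["died_in_hospital"]
        and not claim["transferred_to_acute_care"]
        and claim["months_ffs_enrolled_before"] >= 12
    )

# Hypothetical sample: the second record fails the age criterion.
claims = [
    {"age": 79, "principal_dx": "486", "died_in_hospital": False,
     "transferred_to_acute_care": False, "months_ffs_enrolled_before": 24},
    {"age": 62, "principal_dx": "486", "died_in_hospital": False,
     "transferred_to_acute_care": False, "months_ffs_enrolled_before": 24},
]
index_admissions = [c for c in claims if is_index_admission(c)]
```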

Outcome

The outcome was 30‐day readmission, defined as occurrence of at least one hospitalization for any cause within 30 days of discharge after an index admission. Readmissions were identified from hospital claims data, and were attributed to the hospital that had discharged the patient. A 30‐day time frame was selected because it is a clinically meaningful period during which hospitals can be expected to collaborate with other organizations and providers to implement measures to reduce the risk of rehospitalization.
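The outcome definition amounts to a simple window test on claim dates. A minimal sketch (the function name is our own; same-day events are excluded here because transfers were removed from the cohort):

```python
from datetime import date

def readmitted_within_30_days(discharge_date, later_admission_dates):
    """Outcome flag: at least one all-cause hospitalization occurring
    1 to 30 days after discharge from the index admission."""
    return any(0 < (adm - discharge_date).days <= 30
               for adm in later_admission_dates)

# Example: discharged June 1, readmitted June 28 -> counts as an outcome.
flag = readmitted_within_30_days(date(2006, 6, 1), [date(2006, 6, 28)])
```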

Candidate and Final Model Variables

Candidate variables for the administrative claims model were selected by a clinician team from 189 diagnostic groups included in the Hierarchical Condition Category (HCC) clinical classification system.19 The HCC clinical classification system was developed for CMS in preparation for all‐encounter risk adjustment for Medicare Advantage (managed care). Under the HCC algorithm, the 15,000+ ICD‐9‐CM diagnosis codes are assigned to one of 189 clinically coherent condition categories (CCs). We used the April 2008 version of the ICD‐9‐CM to CC assignment map, which is maintained by CMS and posted at http://www.qualitynet.org. A total of 154 CCs were considered to be potentially relevant to the readmission outcome and were included for further consideration. Some CCs were further combined into clinically coherent groupings of CCs. Our set of candidate variables ultimately included 97 CC‐based variables, two demographic variables (age and sex), and two procedure codes potentially relevant to readmission risk (history of percutaneous coronary intervention [PCI] and history of coronary artery bypass graft [CABG]).

The final risk‐adjustment model included 39 variables selected by the team of clinicians and analysts, primarily based on their clinical relevance but also with knowledge of the strength of their statistical association with readmission outcome (Table 1). For each patient, the presence or absence of these conditions was assessed from multiple sources, including secondary diagnoses during the index admission, principal and secondary diagnoses from hospital admissions in the 12 months prior to the index admission, and diagnoses from hospital outpatient and physician encounters 12 months before the index admission. A small number of CCs were considered to represent potential complications of care (eg, bleeding). Because we did not want to adjust for complications of care occurring during the index admission, a patient was not considered to have one of these conditions unless it was also present in at least one encounter prior to the index admission.
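The rule for assembling each patient's risk variables, including the special handling of potential complications of care, can be sketched as follows. The CC labels and function name are hypothetical; this is an illustration of the stated rule, not the measure's actual implementation.

```python
# CCs the clinician team flagged as potential complications of care
# (hypothetical example label).
COMPLICATION_CCS = {"CC_bleeding"}

def patient_risk_ccs(index_secondary_ccs, prior_year_ccs):
    """Union of CCs from the index admission's secondary diagnoses and
    from all encounters in the prior 12 months, except that a potential
    complication of care counts only if it was also coded in at least
    one encounter before the index admission."""
    ccs = set(index_secondary_ccs) | set(prior_year_ccs)
    return {cc for cc in ccs
            if cc not in COMPLICATION_CCS or cc in prior_year_ccs}
```

For example, a bleeding-type CC coded only during the index stay is dropped, while the same CC coded in the prior year is retained.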

Regression Model Variables and Results in Derivation Sample
Variable | Frequency (%) | Estimate | Standard Error | Odds Ratio | 95% CI

  • Abbreviations: CABG, coronary artery bypass graft; CC, condition category; CI, confidence interval; COPD, chronic obstructive pulmonary disease; DM, diabetes mellitus.

Intercept |  | −2.395 | 0.021 |  |
Age − 65 (years above 65, continuous) |  | −0.0001 | 0.001 | 1.000 | 0.998–1.001
Male | 45 | 0.071 | 0.012 | 1.073 | 1.048–1.099
History of CABG | 5.2 | −0.179 | 0.027 | 0.836 | 0.793–0.881
Metastatic cancer and acute leukemia (CC 7) | 4.3 | 0.177 | 0.029 | 1.194 | 1.128–1.263
Lung, upper digestive tract, and other severe cancers (CC 8) | 6.0 | 0.256 | 0.024 | 1.292 | 1.232–1.354
Diabetes and DM complications (CC 15‐20, 119, 120) | 36 | 0.059 | 0.012 | 1.061 | 1.036–1.087
Disorders of fluid/electrolyte/acid‐base (CC 22, 23) | 34 | 0.149 | 0.013 | 1.160 | 1.131–1.191
Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 46 | 0.118 | 0.012 | 1.126 | 1.099–1.153
Other psychiatric disorders (CC 60) | 12 | 0.108 | 0.017 | 1.114 | 1.077–1.151
Cardio‐respiratory failure and shock (CC 79) | 16 | 0.114 | 0.016 | 1.121 | 1.087–1.156
Congestive heart failure (CC 80) | 39 | 0.151 | 0.014 | 1.163 | 1.133–1.194
Chronic atherosclerosis (CC 83, 84) | 47 | 0.051 | 0.013 | 1.053 | 1.027–1.079
Valvular and rheumatic heart disease (CC 86) | 23 | 0.062 | 0.014 | 1.064 | 1.036–1.093
Arrhythmias (CC 92, 93) | 38 | 0.126 | 0.013 | 1.134 | 1.107–1.163
Vascular or circulatory disease (CC 104‐106) | 38 | 0.088 | 0.012 | 1.092 | 1.066–1.119
COPD (CC 108) | 58 | 0.186 | 0.013 | 1.205 | 1.175–1.235
Fibrosis of lung and other chronic lung disorders (CC 109) | 17 | 0.086 | 0.015 | 1.090 | 1.059–1.122
Renal failure (CC 131) | 17 | 0.147 | 0.016 | 1.158 | 1.122–1.196
Protein‐calorie malnutrition (CC 21) | 7.9 | 0.121 | 0.020 | 1.129 | 1.086–1.173
History of infection (CC 1, 3‐6) | 35 | 0.068 | 0.012 | 1.071 | 1.045–1.097
Severe hematological disorders (CC 44) | 3.6 | 0.117 | 0.028 | 1.125 | 1.064–1.188
Decubitus ulcer or chronic skin ulcer (CC 148, 149) | 10 | 0.101 | 0.018 | 1.106 | 1.067–1.146
History of pneumonia (CC 111‐113) | 44 | 0.065 | 0.013 | 1.067 | 1.041–1.094
Vertebral fractures (CC 157) | 5.1 | 0.113 | 0.024 | 1.120 | 1.068–1.174
Other injuries (CC 162) | 32 | 0.061 | 0.012 | 1.063 | 1.038–1.089
Urinary tract infection (CC 135) | 26 | 0.064 | 0.014 | 1.066 | 1.038–1.095
Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal, and other cancers and tumors (CC 9‐10) | 16 | 0.050 | 0.016 | 1.051 | 1.018–1.084
End‐stage renal disease or dialysis (CC 129, 130) | 1.9 | 0.131 | 0.037 | 1.140 | 1.060–1.226
Drug/alcohol abuse/dependence/psychosis (CC 51‐53) | 12 | 0.081 | 0.017 | 1.084 | 1.048–1.121
Septicemia/shock (CC 2) | 6.3 | 0.094 | 0.022 | 1.098 | 1.052–1.146
Other gastrointestinal disorders (CC 36) | 56 | 0.073 | 0.012 | 1.076 | 1.051–1.102
Acute coronary syndrome (CC 81, 82) | 8.3 | 0.126 | 0.019 | 1.134 | 1.092–1.178
Pleural effusion/pneumothorax (CC 114) | 12 | 0.083 | 0.017 | 1.086 | 1.051–1.123
Other urinary tract disorders (CC 136) | 24 | 0.059 | 0.014 | 1.061 | 1.033–1.090
Stroke (CC 95, 96) | 10 | 0.047 | 0.019 | 1.049 | 1.011–1.088
Dementia and senility (CC 49, 50) | 27 | 0.031 | 0.014 | 1.031 | 1.004–1.059
Hemiplegia, paraplegia, paralysis, functional disability (CC 67‐69, 100‐102, 177, 178) | 7.4 | 0.068 | 0.021 | 1.070 | 1.026–1.116
Other lung disorders (CC 115) | 45 | 0.005 | 0.012 | 1.005 | 0.982–1.030
Major psychiatric disorders (CC 54‐56) | 11 | 0.038 | 0.018 | 1.038 | 1.003–1.075
Asthma (CC 110) | 12 | 0.006 | 0.018 | 1.006 | 0.972–1.041

Model Derivation

For the development of the administrative claims model, we randomly sampled half of the 2006 hospitalizations that met inclusion criteria. To assess model performance at the patient level, we calculated the area under the receiver operating characteristic curve (AUC), and calculated observed readmission rates in the lowest and highest deciles of predicted readmission probabilities. We also compared performance with a null model, a model that adjusted for age and sex, and a model that included all candidate variables.20
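Both patient-level performance checks can be computed directly from outcome labels and predicted probabilities. A minimal stdlib sketch (our own function names; the toy data in the test are hypothetical):

```python
def auc(labels, scores):
    """Concordance statistic (AUC): probability that a readmitted case
    receives a higher predicted risk than a non-readmitted case, with
    ties counted as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def extreme_decile_rates(labels, scores, n_groups=10):
    """Observed outcome rates among patients in the lowest and highest
    groups of predicted risk (deciles when n_groups=10)."""
    ranked = sorted(zip(scores, labels))
    size = len(ranked) // n_groups
    lowest = [y for _, y in ranked[:size]]
    highest = [y for _, y in ranked[-size:]]
    return sum(lowest) / size, sum(highest) / size
```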

Risk‐Standardized Readmission Rates

Using hierarchical logistic regression, we modeled the log‐odds of readmission within 30 days of discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation, or clustering, of observed outcomes, and models the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. We then calculated hospital‐specific readmission rates as the ratio of predicted to expected readmissions (similar to an observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization. The predicted number of readmissions in each hospital is estimated using its patient mix and its estimated hospital‐specific intercept. The expected number of readmissions in each hospital is estimated using the same patient mix but the average hospital‐specific intercept. To assess hospital performance in any given year, we re‐estimate the model coefficients using that year's data.
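Given a fitted model, the indirect standardization step reduces to summing predicted probabilities under two intercepts. A minimal sketch under those assumptions (not the measure's SAS implementation; argument names are our own):

```python
import math

def risk_standardized_rate(linear_predictors, hospital_intercept,
                           average_intercept, national_rate):
    """Risk-standardized readmission rate by indirect standardization:
    (predicted / expected) * national unadjusted rate.

    linear_predictors: covariate contribution Xb for each of the
    hospital's patients, excluding the intercept."""
    inv_logit = lambda z: 1.0 / (1.0 + math.exp(-z))
    # Predicted: this hospital's own estimated intercept.
    predicted = sum(inv_logit(hospital_intercept + xb)
                    for xb in linear_predictors)
    # Expected: the average hospital intercept, same patient mix.
    expected = sum(inv_logit(average_intercept + xb)
                   for xb in linear_predictors)
    return (predicted / expected) * national_rate
```

A hospital whose intercept equals the national average receives exactly the national unadjusted rate for its patient mix.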

Model Validation: Administrative Claims

We compared the model performance in the development sample with its performance in the sample from the 2006 data that was not selected for the development set, and separately among pneumonia admissions in 2005. The model was recalibrated in each validation set.

Model Validation: Medical Record Abstraction

We developed a separate medical record‐based model of readmission risk using information from charts that had previously been abstracted as part of CMS's National Pneumonia Project. To select variables for this model, the clinician team: 1) reviewed the list of variables that were included in a medical record model that was previously developed for validating the National Quality Forum‐approved pneumonia mortality measure; 2) reviewed a list of other potential candidate variables available in the National Pneumonia Project dataset; and 3) reviewed variables that emerged as potentially important predictors of readmission, based on a systematic review of the literature that was conducted as part of measure development. This selection process resulted in a final medical record model that included 35 variables.

We linked patients in the National Pneumonia Project cohort to their Medicare claims data, including claims from one year before the index hospitalization, so that we could calculate risk‐standardized readmission rates in this cohort separately using medical record and claims‐based models. This analysis was conducted at the state level, for the 50 states plus the District of Columbia and Puerto Rico, because medical record data were unavailable in sufficient numbers to permit hospital‐level comparisons. To examine the relationship between risk‐standardized rates obtained from medical record and administrative data models, we estimated a linear regression model describing the association between the two rates, weighting each state by number of index hospitalizations, and calculated the correlation coefficient and the intercept and slope of this equation. A slope close to 1 and an intercept close to 0 would provide evidence that risk‐standardized state readmission rates from the medical record and claims models were similar. We also calculated the difference between state risk‐standardized readmission rates from the two models.
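The state-level comparison is a weighted least squares fit of one set of rates on the other. A stdlib sketch of that computation (illustrative only; the state rates in the test are made up):

```python
def weighted_linreg(x, y, w):
    """Weighted least squares fit of y = intercept + slope * x,
    weighting each state by its number of index hospitalizations.
    A slope near 1 and intercept near 0 indicate agreement between
    the two sets of risk-standardized rates."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    var = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    slope = cov / var
    intercept = my - slope * mx
    return intercept, slope
```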

Analyses were conducted with the use of SAS version 9.1.3 (SAS Institute Inc, Cary, NC). Models were fitted separately for the National Pneumonia Project and 2006 cohort. We estimated the hierarchical models using the GLIMMIX procedure in SAS. The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation and Performance

After exclusions were applied, the 2006 sample included 453,251 pneumonia hospitalizations (Figure 1). The development sample consisted of 226,545 hospitalizations at 4675 hospitals, with an overall unadjusted 30‐day readmission rate of 17.4%. In 11,694 index cases (5.2%), the patient died within 30 days without being readmitted. Median readmission rate was 16.3%, 25th and 75th percentile rates were 11.1% and 21.3%, and at the 10th and 90th percentile, hospital readmission rates ranged from 4.6% to 26.7% (Figure 2).

Figure 1
Pneumonia admissions included in measure calculation.
Figure 2
Distribution of unadjusted readmission rates.

The claims model included 39 variables (age, sex, and 37 clinical variables) (Table 1). The mean age of the cohort was 80.0 years, with 55.5% women and 11.1% nonwhite patients. Mean observed readmission rates in the development sample ranged from 9% in the lowest decile of predicted readmission risk to 32% in the highest decile, an absolute difference of 23 percentage points. The AUC was 0.63. For comparison, a model with only age and sex had an AUC of 0.51, and a model with all candidate variables had an AUC of 0.63 (Table 2).

Readmission Model Performance of Administrative Claims Models
Sample | Calibration (γ0, γ1)* | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals, % in (<−2) | (−2, 0) | (0, 2) | (≥2) | Model χ2 (No. of Covariates)

  • NOTE: Over‐fitting indices (γ0, γ1) provide evidence of over‐fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities p = 1/(1 + exp{−Xb}), and Z = Xb (eg, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample; eg, Logit(P(Y = 1|Z)) = γ0 + γ1Z. Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over‐fitting.

  • Abbreviations: AUC, area under the receiver operating curve.

  • Max‐rescaled R‐square.

  • Observed rates.

  • Wald chi‐square.

Development sample
2006 (1st half), N = 226,545 | (0, 1) | (0.09, 0.32) | 0.63 | 0 | 82.62 | 7.39 | 9.99 | 6,843 (40)
Validation sample
2006 (2nd half), N = 226,706 | (0.002, 0.997) | (0.09, 0.31) | 0.63 | 0 | 82.55 | 7.45 | 9.99 | 6,870 (40)
2005, N = 536,015 | (0.035, 1.008) | (0.08, 0.31) | 0.63 | 0 | 82.67 | 7.31 | 10.03 | 16,241 (40)
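The over-fitting indices reported in Table 2 come from refitting, in the validation sample, a logistic model with only an intercept and a slope on the development model's linear predictor Z. A minimal sketch using Newton–Raphson (our own implementation, not the authors' SAS code):

```python
import math

def fit_calibration(z, y, iters=50):
    """Fit Logit(P(Y=1|Z)) = g0 + g1*Z by Newton-Raphson.
    Estimates of g0 near 0 and g1 near 1 in a validation sample
    indicate little over-fitting."""
    g0, g1 = 0.0, 1.0
    for _ in range(iters):
        p = [1 / (1 + math.exp(-(g0 + g1 * zi))) for zi in z]
        # Gradient of the log-likelihood.
        u0 = sum(yi - pi for yi, pi in zip(y, p))
        u1 = sum(zi * (yi - pi) for zi, yi, pi in zip(z, y, p))
        # Observed information (negative Hessian).
        w = [pi * (1 - pi) for pi in p]
        h00 = sum(w)
        h01 = sum(wi * zi for wi, zi in zip(w, z))
        h11 = sum(wi * zi * zi for wi, zi in zip(w, z))
        det = h00 * h11 - h01 * h01
        # Newton step: delta = H^{-1} u.
        g0 += (h11 * u0 - h01 * u1) / det
        g1 += (-h01 * u0 + h00 * u1) / det
    return g0, g1
```

When y is generated from a well-calibrated model, the fitted pair should be close to (0, 1).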

Hospital Risk‐Standardized Readmission Rates

Risk‐standardized readmission rates varied across hospitals (Figure 3). Median risk‐standardized readmission rate was 17.3%, and the 25th and 75th percentiles were 16.9% and 17.9%, respectively. The 5th percentile was 16.0% and the 95th percentile was 19.1%. Odds of readmission for a hospital one standard deviation above average was 1.4 times that of a hospital one standard deviation below average.

Figure 3
Distribution of risk‐standardized readmission rates.

Administrative Model Validation

In the remaining 50% of pneumonia index hospitalizations from 2006, and the entire 2005 cohort, regression coefficients and standard errors of model variables were similar to those in the development data set. Model performance using 2005 data was consistent with model performance using the 2006 development and validation half‐samples (Table 2).

Medical Record Validation

After exclusions, the medical record sample taken from the National Pneumonia Project included 47,429 cases, with an unadjusted 30‐day readmission rate of 17.0%. The final medical record risk‐adjustment model included a total of 35 variables, whose prevalence and association with readmission risk varied modestly (Table 3). Performance of the medical record and administrative models was similar (areas under the ROC curve 0.59 and 0.63, respectively) (Table 4). Additionally, in the administrative model, predicted readmission rates ranged from 8% in the lowest predicted decile to 30% in the highest predicted decile, while in the medical record model, the corresponding rates varied from 10% to 26%.

Regression Model Results from Medical Record Sample
Variable | Percent | Estimate | Standard Error | Odds Ratio | 95% CI

  • NOTE: Between‐state variance = 0.024; standard error = 0.00.

  • Abbreviations: BP, blood pressure; BUN, blood urea nitrogen; CI, confidence interval; SD, standard deviation; WBC, white blood cell count.

Age − 65, mean (SD) | 15.24 (7.87) | −0.003 | 0.002 | 0.997 | 0.993–1.000
Male | 46.18 | 0.122 | 0.025 | 1.130 | 1.075–1.188
Nursing home resident | 17.71 | 0.035 | 0.037 | 1.036 | 0.963–1.114
Neoplastic disease | 6.80 | 0.130 | 0.049 | 1.139 | 1.034–1.254
Liver disease | 1.04 | −0.089 | 0.123 | 0.915 | 0.719–1.164
History of heart failure | 28.98 | 0.234 | 0.029 | 1.264 | 1.194–1.339
History of renal disease | 8.51 | 0.188 | 0.047 | 1.206 | 1.100–1.323
Altered mental status | 17.95 | 0.009 | 0.034 | 1.009 | 0.944–1.080
Pleural effusion | 21.20 | 0.165 | 0.030 | 1.179 | 1.111–1.251
BUN ≥30 mg/dl | 23.28 | 0.160 | 0.033 | 1.174 | 1.100–1.252
BUN missing | 14.56 | −0.101 | 0.185 | 0.904 | 0.630–1.298
Systolic BP <90 mmHg | 2.95 | 0.068 | 0.070 | 1.070 | 0.932–1.228
Systolic BP missing | 11.21 | 0.149 | 0.425 | 1.160 | 0.504–2.669
Pulse ≥125/min | 7.73 | 0.036 | 0.047 | 1.036 | 0.945–1.137
Pulse missing | 11.22 | 0.210 | 0.405 | 1.234 | 0.558–2.729
Respiratory rate ≥30/min | 16.38 | 0.079 | 0.034 | 1.082 | 1.012–1.157
Respiratory rate missing | 11.39 | 0.204 | 0.240 | 1.226 | 0.765–1.964
Sodium <130 mmol/L | 4.82 | 0.136 | 0.057 | 1.145 | 1.025–1.280
Sodium missing | 14.39 | 0.049 | 0.143 | 1.050 | 0.793–1.391
Glucose ≥250 mg/dl | 5.19 | −0.005 | 0.057 | 0.995 | 0.889–1.114
Glucose missing | 15.44 | −0.156 | 0.105 | 0.855 | 0.696–1.051
Hematocrit <30% | 7.77 | 0.270 | 0.044 | 1.310 | 1.202–1.428
Hematocrit missing | 13.62 | −0.071 | 0.135 | 0.932 | 0.715–1.215
Creatinine ≥2.5 mg/dL | 4.68 | 0.109 | 0.062 | 1.115 | 0.989–1.258
Creatinine missing | 14.63 | 0.200 | 0.167 | 1.221 | 0.880–1.695
WBC 6‐12 b/L | 38.04 | −0.021 | 0.049 | 0.979 | 0.889–1.079
WBC >12 b/L | 41.45 | −0.068 | 0.049 | 0.934 | 0.848–1.029
WBC missing | 12.85 | 0.167 | 0.162 | 1.181 | 0.860–1.623
Immunosuppressive therapy | 15.01 | 0.347 | 0.035 | 1.415 | 1.321–1.516
Chronic lung disease | 42.16 | 0.137 | 0.028 | 1.147 | 1.086–1.211
Coronary artery disease | 39.57 | 0.150 | 0.028 | 1.162 | 1.100–1.227
Diabetes mellitus | 20.90 | 0.137 | 0.033 | 1.147 | 1.076–1.223
Alcohol/drug abuse | 3.40 | −0.099 | 0.071 | 0.906 | 0.788–1.041
Dementia/Alzheimer's disease | 16.38 | 0.125 | 0.038 | 1.133 | 1.052–1.222
Splenectomy | 0.44 | 0.016 | 0.186 | 1.016 | 0.706–1.463
Model Performance of Medical Record Model
Model | Calibration (γ0, γ1)* | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals, % in (<−2) | (−2, 0) | (0, 2) | (≥2) | Model χ2 (No. of Covariates)

  • Abbreviations: AUC, area under the receiver operating curve.

  • Max‐rescaled R‐square.

  • Observed rates.

  • Wald chi‐square.

Medical Record Model Development Sample (NP)
N = 47,429, No. of 30‐day readmissions = 8,042 | (1, 0) | (0.10, 0.26) | 0.59 | 0 | 83.04 | 5.28 | 11.68 | 710 (35)
Linked Administrative Model Validation Sample
N = 47,429, No. of 30‐day readmissions = 8,042 | (1, 0) | (0.08, 0.30) | 0.63 | 0 | 83.04 | 6.94 | 10.01 | 1,414 (40)

The correlation coefficient of the estimated state‐specific standardized readmission rates from the administrative and medical record models was 0.96, and the proportion of the variance explained by the model was 0.92 (Figure 4).

Figure 4
Comparison of state‐level risk‐standardized readmission rates from medical record and administrative models. Abbreviations: HGLM, hierarchical generalized linear models.

DISCUSSION

We have described the development, validation, and results of a hospital, 30‐day, risk‐standardized readmission model for pneumonia that was created to support current federal transparency initiatives. The model uses administrative claims data from Medicare fee‐for‐service patients and produces results that are comparable to a model based on information obtained through manual abstraction of medical records. We observed an overall 30‐day readmission rate of 17%, and our analyses revealed substantial variation across US hospitals, suggesting that improvement by lower performing institutions is an achievable goal.

Because more than one in six pneumonia patients are rehospitalized shortly after discharge, and because pneumonia hospitalizations represent an enormous expense to the Medicare program, prevention of readmissions is now widely recognized to offer a substantial opportunity to improve patient outcomes while simultaneously lowering health care costs. Accordingly, promotion of strategies to reduce readmission rates has become a key priority for payers and quality‐improvement organizations. These range from policy‐level attempts to stimulate change, such as publicly reporting hospital readmission rates on government websites, to accreditation standards, such as the Joint Commission's requirement to accurately reconcile medications, to quality improvement collaboratives focused on sharing best practices across institutions. Regardless of the approach taken, a valid, risk‐adjusted measure of performance is required to evaluate and track performance over time. The measure we have described meets the National Quality Forum's measure evaluation criteria in that it addresses an important clinical topic for which there appear to be significant opportunities for improvement, the measure is precisely defined and has been subjected to validity and reliability testing, it is risk‐adjusted based on patient clinical factors present at the start of care, is feasible to produce, and is understandable by a broad range of potential users.21 Because hospitalists are the physicians primarily responsible for the care of patients with pneumonia at US hospitals, and because they frequently serve as the physician champions for quality improvement activities related to pneumonia, it is especially important that they maintain a thorough understanding of the measures and methodologies underlying current efforts to measure hospital performance.

Several features of our approach warrant additional comment. First, we deliberately chose to measure all readmission events rather than attempt to discriminate between potentially preventable and nonpreventable readmissions. From the patient perspective, readmission for any reason is a concern, and limiting the measure to pneumonia‐related readmissions could make it susceptible to gaming by hospitals. Moreover, determining whether a readmission is related to a potential quality problem is not straightforward. For example, a patient with pneumonia whose discharge medications were prescribed incorrectly may be readmitted with a hip fracture following an episode of syncope. It would be inappropriate to treat this readmission as unrelated to the care the patient received for pneumonia. Additionally, while our approach does not presume that every readmission is preventable, the goal is to reduce the risk of readmissions generally (not just in narrowly defined subpopulations), and successful interventions to reduce rehospitalization have typically demonstrated reductions in all‐cause readmission.9, 22 Second, deaths that occurred within 30 days of discharge, yet that were not accompanied by a hospital readmission, were not counted as a readmission outcome. While it may seem inappropriate to treat a postdischarge death as a nonevent (rather than censoring or excluding such cases), alternative analytic approaches, such as using a hierarchical survival model, are not currently computationally feasible with large national data sets. Fortunately, only a relatively small proportion of discharges fell into this category (5.2% of index cases in the 2006 development sample died within 30 days of discharge without being readmitted). An alternative approach to handling the competing outcome of death would have been to use a composite outcome of readmission or death. 
However, we believe that it is important to report the outcomes separately because factors that predict readmission and mortality may differ, and when making comparisons across hospitals it would not be possible to determine whether differences in rate were due to readmission or mortality. Third, while the patient‐level readmission model showed only modest discrimination, we intentionally excluded covariates such as race and socioeconomic status, as well as in‐hospital events and potential complications of care, and whether patients were discharged home or to a skilled nursing facility. While these variables could have improved predictive ability, they may be directly or indirectly related to quality or supply factors that should not be included in a model that seeks to control for patient clinical characteristics. For example, if hospitals with a large share of poor patients have higher readmission rates, then including income in the model will obscure differences that are important to identify. While we believe that the decision to exclude such factors in the model is in the best interest of patients, and supports efforts to reduce health inequality in society more generally, we also recognize that hospitals that care for a disproportionate share of poor patients are likely to require additional resources to overcome these social factors. Fourth, we limited the analysis to patients with a principal diagnosis of pneumonia, and chose not to also include those with a principal diagnosis of sepsis or respiratory failure coupled with a secondary diagnosis of pneumonia. While the broader definition is used by CMS in the National Pneumonia Project, that initiative relied on chart abstraction to differentiate pneumonia present at the time of admission from cases developing as a complication of hospitalization. 
Additionally, we did not attempt to differentiate between community‐acquired and healthcare‐associated pneumonia; however, our approach is consistent with the National Pneumonia Project and the Pneumonia Patient Outcomes Research Team.18 Fifth, while our model estimates readmission rates at the hospital level, we recognize that readmissions are influenced by a complex and extensive range of factors. In this context, greater cooperation between hospitals and other care providers will almost certainly be required in order to achieve dramatic improvement in readmission rates, which in turn will depend upon changes to the way serious illness is paid for. Some options that have recently been described include imposing financial penalties for early readmission, extending the boundaries of case‐based payment beyond hospital discharge, and bundling payments between hospitals and physicians.23–25

Our measure has several limitations. First, our models were developed and validated using Medicare data, and the results may not apply to pneumonia patients younger than 65 years. However, most patients hospitalized with pneumonia in the US are 65 or older. In addition, we were unable to test the model in a Medicare managed care population, because data are not currently available on such patients. Finally, the medical record‐based validation was conducted at the state level because the sample size was insufficient to carry it out at the hospital level.

In conclusion, more than 17% of Medicare beneficiaries are readmitted within 30 days following discharge after a hospitalization for pneumonia, and rates vary substantially across institutions. The development of a valid measure of hospital performance and public reporting are important first steps towards focusing attention on this problem. Actual improvement will now depend on whether hospitals and partner organizations are successful at identifying and implementing effective methods to prevent readmission.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418-1428.
  2. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. 2007.
  3. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital‐based Care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed November 7, 2009.
  4. Centers for Medicare 353(3):255-264.
  5. Baker DW, Einstadter D, Husak SS, Cebul RD. Trends in postdischarge mortality and readmissions: has length of stay declined too far? Arch Intern Med. 2004;164(5):538-544.
  6. Vecchiarino P, Bohannon RW, Ferullo J, Maljanian R. Short‐term outcomes and their predictors for patients hospitalized with community‐acquired pneumonia. Heart Lung. 2004;33(5):301-307.
  7. Dean NC, Bateman KA, Donnelly SM, et al. Improved clinical outcomes with utilization of a community‐acquired pneumonia guideline. Chest. 2006;130(3):794-799.
  8. Gleason PP, Meehan TP, Fine JM, Galusha DH, Fine MJ. Associations between initial antimicrobial therapy and medical outcomes for hospitalized elderly patients with pneumonia. Arch Intern Med. 1999;159(21):2562-2572.
  9. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  10. Coleman EA, Parry C, Chalmers S, Min S. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822-1828.
  11. Corrigan JM, Eden J, Smith BM, eds. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Committee on Enhancing Federal Healthcare Quality Programs. Washington, DC: National Academies Press; 2003.
  12. Medicare.gov—Hospital Compare. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default1(1):2937.
  13. Krumholz HM, Normand ST, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75-85.
  14. NQF‐Endorsed® Standards. Available at: http://www.qualityforum.org/Measures_List.aspx. Accessed November 6, 2009.
  15. Houck PM, Bratzler DW, Nsa W, Ma A, Bartlett JG. Timing of antibiotic administration and outcomes for Medicare patients hospitalized with community‐acquired pneumonia. Arch Intern Med. 2004;164(6):637-644.
  16. Pope G, Ellis R, Ash A. Diagnostic Cost Group Hierarchical Condition Category Models for Medicare Risk Adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc; 2000. Available at: http://www.cms.hhs.gov/Reports/Reports/ItemDetail.asp?ItemID=CMS023176. Accessed November 7, 2009.
  17. Harrell FE Jr. Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis. 1st ed. New York: Springer; 2006.
  18. National Quality Forum—Measure Evaluation Criteria. 2008. Available at: http://www.qualityforum.org/uploadedFiles/Quality_Forum/Measuring_Performance/Consensus_Development_Process%E2%80%99s_Principle/EvalCriteria2008–08‐28Final.pdf?n=4701.
  19. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow‐up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281(7):613-620.
  20. Davis K. Paying for care episodes and care coordination. N Engl J Med. 2007;356(11):1166-1168.
  21. Luft HS. Health care reform—toward more freedom, and responsibility, for physicians. N Engl J Med. 2009;361(6):623-628.
  22. Rosenthal MB. Beyond pay for performance—emerging models of provider‐payment reform. N Engl J Med. 2008;359(12):1197-1200.
Issue
Journal of Hospital Medicine - 6(3)
Page Number
142-150
Display Headline
Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia
Legacy Keywords
cost analysis, cost per day, end of life, hospice, length of stay, palliative care, triggers
Article Source

Copyright © 2010 Society of Hospital Medicine

Correspondence Location
Center for Quality of Care Research, Baystate Medical Center, 280 Chestnut Street, Springfield, MA 01199
Readmission and Mortality [Rates] in Pneumonia

Article Type
Changed
Sun, 05/28/2017 - 20:18
Display Headline
The performance of US hospitals as reflected in risk‐standardized 30‐day mortality and readmission rates for medicare beneficiaries with pneumonia

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, the number of patients hospitalized for pneumonia grew by 20% between the periods 1988 to 1990 and 2000 to 2002, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia‐specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence) can provide insights about quality and can help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.7-9

In 2008 CMS expanded its public reporting initiatives by adding risk‐standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30‐day risk‐standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30‐day risk‐standardized mortality and readmission rates for acute myocardial infarction and heart failure; the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.
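The principal‐diagnosis inclusion rule above can be sketched as a simple predicate. This is a minimal illustration, not the measure's actual code: it assumes dotted ICD‐9‐CM codes (claims files often store codes without the decimal point, in which case they would need to be normalized first), and the function name is our own.

```python
# Illustrative check of the pneumonia cohort inclusion codes listed above:
# 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0 (ICD-9-CM).
PNEUMONIA_EXACT = {"481", "485", "486", "487.0"}
PNEUMONIA_PREFIXES = ("480.", "482.", "483.")

def is_pneumonia_principal_dx(code: str) -> bool:
    """Return True if a dotted ICD-9-CM principal diagnosis qualifies for the cohort."""
    code = code.strip()
    return code in PNEUMONIA_EXACT or code.startswith(PNEUMONIA_PREFIXES)
```

For example, `482.41` (methicillin‐susceptible Staphylococcus aureus pneumonia) qualifies, while `487.1` (influenza with other respiratory manifestations) does not.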

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.
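The indirect standardization step described above (the predicted‐to‐expected ratio multiplied by the national unadjusted rate) can be sketched as follows. The event counts and national rate below are illustrative inputs, not output of the fitted hierarchical model.

```python
def risk_standardized_rate(predicted_events: float,
                           expected_events: float,
                           national_rate: float) -> float:
    """Indirect standardization: predicted/expected ratio times the national rate.

    `predicted_events` would come from the model including the hospital-specific
    intercept; `expected_events` from the model with the average intercept.
    """
    return (predicted_events / expected_events) * national_rate

# A hospital predicted to have 55 events where 50 were expected, against a
# national unadjusted rate of 11.6%, gets a risk-standardized rate of 12.76%.
print(risk_standardized_rate(55, 50, 11.6))
```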

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically, we identified the 306 hospital referral regions for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. Unlike individual hospitals, hospital referral regions represent regional markets for tertiary care; they are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes, and they provide a more detailed look at variation in outcomes than state‐level results.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians, and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral‐region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates to account for the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low‐volume institutions are shrunken toward the population mean, while hospitals with large caseloads have relatively less shrinkage and estimates closer to their observed rates.18
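The inverse‐variance weighting used for the regional summaries can be sketched as follows; the rates and variances below are illustrative values, not measure output.

```python
def inverse_variance_weighted_mean(rates, variances):
    """Weight each hospital's rate by 1/variance, so that high-volume hospitals
    (whose estimates are more precise) contribute more to the regional mean."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

# A large hospital (rate 11.0, variance 0.25) pulls the mean toward itself
# relative to a small hospital (rate 14.0, variance 1.0):
print(inverse_variance_weighted_mean([11.0, 14.0], [0.25, 1.0]))  # 11.6
```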

To determine whether a hospital's performance is significantly different from the national rate, we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period are excluded from this categorization on Hospital Compare.
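A minimal sketch of this categorization rule, using illustrative intervals and the 25‐case minimum noted above (the function and variable names are our own, not CMS code):

```python
def categorize(interval, national_rate, n_cases, min_cases=25):
    """Classify a hospital by whether its 95% interval estimate for the
    risk-standardized rate overlaps the national crude rate."""
    if n_cases < min_cases:
        return "not categorized"  # too few cases in the 3-year period
    lo, hi = interval
    if hi < national_rate:
        return "better than US national rate"
    if lo > national_rate:
        return "worse than US national rate"
    return "no different than US national rate"

print(categorize((9.2, 10.8), 11.6, 300))   # entire interval below: better
print(categorize((10.5, 12.5), 11.6, 300))  # interval overlaps: no different
```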

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis, 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and rates ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0%, while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital one standard deviation above the national average were 1.68 times the odds for a patient treated at a hospital one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Table 1. Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates

                                              Mortality      Readmission
Patients (n)                                  1,118,583      1,161,817
Hospitals (n)                                 4788           4813
Patient age, years, median (Q1, Q3)           81 (74, 86)    80 (74, 86)
Nonwhite, %                                   11.1           11.1
Hospital case volume, median (Q1, Q3)         168 (77, 323)  174 (79, 334)
Risk‐standardized hospital rate, mean (SD)    11.2 (1.2)     18.3 (0.9)
Minimum                                       6.7            13.6
1st percentile                                7.5            14.9
5th percentile                                8.5            15.8
10th percentile                               9.0            16.4
25th percentile                               10.0           17.2
Median                                        11.1           18.2
75th percentile                               12.3           19.2
90th percentile                               13.5           20.4
95th percentile                               14.4           21.1
99th percentile                               16.1           22.8
Maximum                                       20.9           26.7
Model fit statistics
c‐Statistic                                   0.72           0.63
Intrahospital correlation                     0.07           0.03

Abbreviation: SD, standard deviation.

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis, 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1, Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%) and rates ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4%, while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital one standard deviation above the national average were 1.40 times the odds for a patient treated at a hospital one standard deviation below the national average.
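A hedged note on these contrasts: in hierarchical logistic models of this kind, the odds ratio comparing hospitals one standard deviation above versus one standard deviation below the average intercept is commonly computed as exp(2*sigma), where sigma is the standard deviation of the hospital random intercept on the log‐odds scale. Assuming that convention (the formula is not stated here), the implied values of sigma can be backed out from the reported odds ratios of 1.68 and 1.40:

```python
import math

def odds_ratio_two_sd(sigma: float) -> float:
    """Odds ratio for a hospital +1 SD vs -1 SD from the average random intercept,
    under the exp(2*sigma) convention (an assumption, not stated in the text)."""
    return math.exp(2.0 * sigma)

sigma_mortality = math.log(1.68) / 2.0    # implied by the reported OR of 1.68
sigma_readmission = math.log(1.40) / 2.0  # implied by the reported OR of 1.40
print(round(sigma_mortality, 3), round(sigma_readmission, 3))
```

The larger implied sigma for mortality is consistent with the observation below that mortality rates vary more across hospitals than readmission rates.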

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse and 333 (7.0%) had less than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
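The coefficients of variation quoted above follow directly from the means and standard deviations of the risk‐standardized rates in Table 1; a quick check:

```python
def coefficient_of_variation(mean: float, sd: float) -> float:
    """CV as a percentage: 100 * SD / mean."""
    return 100.0 * sd / mean

cv_mortality = coefficient_of_variation(11.2, 1.2)    # Table 1: mean 11.2, SD 1.2
cv_readmission = coefficient_of_variation(18.3, 0.9)  # Table 1: mean 18.3, SD 0.9
print(round(cv_mortality, 1), round(cv_readmission, 1))  # 10.7 4.9
```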

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. The highest mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, the East and West South Central, the East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, the East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSRR, risk‐standardized readmission rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative differences in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction, and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more so than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients only, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally, our mortality and readmission models were validated against those based on medical record data, and the outputs of the 2 approaches were highly correlated.15, 24, 25 Our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that it is likely to have limited our study to patients with less severe disease, and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. While we excluded patients with a 1‐day length of stay from the mortality analysis to reduce the risk of including patients who did not actually have pneumonia, we did not exclude them from the readmission analysis because a very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions.
However, if the experience in acute myocardial infarction can serve as a guide, then it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically concentrated than mortality rates; however, it is possible that this is related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.28-30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
  1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital‐Based Care in the United States, 2007 [Internet]. 2009 [cited 2009 Nov 7]. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed June 2010.
  2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP) [Internet]. 2007 [cited 2010 May 13]. Available at: http://www.hcup‐us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988‐2002. JAMA. 2005;294(21):2712‐2719.
  4. Heron M. Deaths: Leading Causes for 2006. NVSS [Internet]. 2010 Mar 31;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
  5. Centers for Medicare and Medicaid Services. Pneumonia [Internet]. [cited 2010 May 13]. Available at: http://www.qualitynet.org/dcs/ContentServer?cid=108981596702326(1):7585.
  6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182‐189.
  7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694‐2702.
  8. Medicare.gov - Hospital Compare [Internet]. [cited 2009 Nov 6]. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default. Accessed June 2010.
  9. Krumholz H, Normand S, Bratzler D, et al. Risk‐Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30‐Day Mortality Model for Pneumonia [Internet]. Yale University; 2006. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page.
  10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206‐226.
  11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
  12. Patient Protection and Affordable Care Act [Internet]. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
  13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670‐1676.
  14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals — the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265‐274.
  15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407‐413.
  16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693‐1701.
  17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683‐1692.
  18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):72‐78.
  19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989‐995.
  20. Thomas JW, Hofer TP. Research evidence on the validity of risk‐adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371‐404.
  21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074‐1081.
  22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153‐157.
  23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
Journal of Hospital Medicine - 5(6):E12-E18
Keywords: community‐acquired and nosocomial pneumonia, quality improvement, outcomes measurement, patient safety, geriatric patient

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, an increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, the number of patients hospitalized for pneumonia grew by 20% between the periods 1988 to 1990 and 2000 to 2002, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia‐specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence) can provide insights about quality and help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.7‐9

In 2008 CMS expanded its public reporting initiatives by adding risk‐standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30‐day risk‐standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30‐day risk‐standardized mortality and readmission rates for acute myocardial infarction and heart failure; the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.
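The inclusion rule above can be sketched in code. This is an illustrative sketch of the cohort definition only, not the CMS measure specification (the function name and string-prefix matching approach are our own); the code families are those listed in the text.

```python
# Illustrative sketch of the basic cohort inclusion rule (age 65+ and a
# principal ICD-9-CM pneumonia diagnosis). Not the CMS specification.
# Code families from the text: 480.X, 481, 482.XX, 483.X, 485, 486, 487.0.

PNEUMONIA_EXACT = {"481", "485", "486", "487.0"}
PNEUMONIA_PREFIXES = ("480.", "482.", "483.")

def in_pneumonia_cohort(principal_dx: str, age: int) -> bool:
    """Return True if the admission meets the basic cohort criteria."""
    dx = principal_dx.strip()
    code_ok = dx in PNEUMONIA_EXACT or dx.startswith(PNEUMONIA_PREFIXES)
    return age >= 65 and code_ok
```

Note that 487.0 (influenza with pneumonia) is included, but other 487.x influenza codes are not.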

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of, admission; those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization); those discharged against medical advice; and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but it does not exclude patients on the basis of hospice program enrollment (because these patients have already been admitted, and readmissions among hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models, which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcomes after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.
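The indirect standardization step can be illustrated with a small worked example (the event counts below are invented for illustration; only the formula, predicted over expected times the national rate, comes from the text):

```python
# Illustration of indirect standardization as described above:
# risk-standardized rate = (predicted events / expected events) x national rate.
# "Predicted" uses the hospital's own random intercept; "expected" uses the
# average hospital effect applied to the same patient mix.

def risk_standardized_rate(predicted_events, expected_events, national_rate):
    return (predicted_events / expected_events) * national_rate

# Hypothetical hospital: 55 predicted deaths vs. 50 expected for its case
# mix, with the 11.6% national crude mortality rate from this study:
rsmr = risk_standardized_rate(55.0, 50.0, 11.6)
print(round(rsmr, 2))  # 12.76 - worse than the national rate
```

A hospital whose predicted events equal its expected events lands exactly on the national rate.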

The model for mortality has a c‐statistic of 0.72, whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63, whereas a model based on medical record review had a c‐statistic of 0.59. The mortality and readmission models produce state‐level estimates similar to those of the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically, we identified the 306 hospital referral regions for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. Hospital referral regions represent regional markets for tertiary care; they are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes, and they provide a more detailed look at variation in outcomes than state‐level results.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18
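The two estimation ideas in this paragraph, inverse-variance weighting and shrinkage toward the population mean, can be sketched with made-up inputs. This is a simple normal-normal shrinkage illustration, not the SAS GLIMMIX hierarchical model actually used; all numbers and function names are our own.

```python
# (1) Inverse-variance-weighted mean: hospitals with more precise
#     estimates (smaller variance) contribute more weight.
def inverse_variance_mean(rates, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

# (2) Shrinkage: a hospital's estimate is pulled toward the population
#     mean in proportion to its sampling uncertainty, so low-volume
#     hospitals shrink more and high-volume hospitals stay near their
#     observed rate.
def shrink(observed_rate, pop_mean, between_var, within_var):
    b = between_var / (between_var + within_var)  # weight on observed rate
    return pop_mean + b * (observed_rate - pop_mean)
```

For example, a small hospital with an observed rate of 20% but a large sampling variance would be shrunk most of the way back toward an 11% population mean.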

To determine whether a hospital's performance is significantly different from the national rate, we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period are excluded from this categorization on Hospital Compare.
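A minimal sketch of this categorization rule follows; the thresholds and labels are as described above, but the function and its inputs are illustrative, not the Hospital Compare implementation.

```python
# Sketch of the Hospital Compare categorization: a hospital is flagged
# only when its 95% interval estimate lies entirely on one side of the
# national crude rate; hospitals with fewer than 25 cases over the
# 3-year period are not categorized.

def categorize(ci_lower, ci_upper, national_rate, n_cases, min_cases=25):
    if n_cases < min_cases:
        return "not categorized"
    if ci_upper < national_rate:
        return "better than the US national rate"
    if ci_lower > national_rate:
        return "worse than the US national rate"
    return "no different than the US national rate"
```

Because most hospitals' intervals straddle the national rate, the large "no different" category reported in the Results is expected.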

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis, 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and rates ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0%, while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital one standard deviation above the national average were 1.68 times higher than those for a patient treated at a hospital one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Table 1. Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates

                                              Mortality       Readmission
Patients (n)                                  1,118,583       1,161,817
Hospitals (n)                                 4,788           4,813
Patient age, years, median (Q1, Q3)           81 (74, 86)     80 (74, 86)
Nonwhite, %                                   11.1            11.1
Hospital case volume, median (Q1, Q3)         168 (77, 323)   174 (79, 334)
Risk‐standardized hospital rate, mean (SD)    11.2 (1.2)      18.3 (0.9)
  Minimum                                     6.7             13.6
  1st percentile                              7.5             14.9
  5th percentile                              8.5             15.8
  10th percentile                             9.0             16.4
  25th percentile                             10.0            17.2
  Median                                      11.1            18.2
  75th percentile                             12.3            19.2
  90th percentile                             13.5            20.4
  95th percentile                             14.4            21.1
  99th percentile                             16.1            22.8
  Maximum                                     20.9            26.7
Model fit statistics
  c‐statistic                                 0.72            0.63
  Intrahospital correlation                   0.07            0.03

Abbreviation: SD, standard deviation.

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3,968 (83.7%) were no different than the national average, 221 (4.6%) were worse, and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis, 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1, Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4%, while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital one standard deviation above the national average were 1.40 times higher than the odds for a patient treated at a hospital one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4,203 (88.2%) were no different than the national average, 163 (3.4%) were worse, and 333 (7.0%) had fewer than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
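The coefficients of variation quoted above can be reproduced from the means and standard deviations of the risk-standardized hospital rates in Table 1:

```python
# Coefficient of variation as a percentage: CV = 100 * SD / mean.
# Inputs are the mean (SD) risk-standardized hospital rates from Table 1.

def coefficient_of_variation(sd, mean):
    return 100.0 * sd / mean

print(round(coefficient_of_variation(1.2, 11.2), 1))  # 10.7 (mortality)
print(round(coefficient_of_variation(0.9, 18.3), 1))  # 4.9 (readmission)
```

Because CV normalizes dispersion by the mean, it supports the paper's point that mortality rates vary more than readmission rates even though readmission rates are higher in absolute terms.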

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. The highest‐mortality regions were found across the entire country, including parts of Northern New England, the Mid‐ and South Atlantic, the East and West South Central, the East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid‐ and South Atlantic, the East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSRR, risk‐standardized readmission rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally, our mortality and readmission models were validated against those based on medical record data, and the outputs of the 2 approaches were highly correlated.15, 24, 25 Third, our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that it is likely to have limited our study to patients with less severe disease, and it may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. Fourth, while we excluded patients with a 1‐day length of stay from the mortality analysis to reduce the risk of including patients who did not actually have pneumonia, we did not exclude them from the readmission analysis because a very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions.
However, if the experience in acute myocardial infarction can serve as a guide, then it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates showed greater geographic concentration than mortality rates; however, it is possible that this may be related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.28‐30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research, and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE, for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, between the periods 1988 to 1990 and 2000 to 2002 the number of patients hospitalized for pneumonia grew by 20%, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia‐specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence), can provide insights about quality, and can help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.79

In 2008 CMS expanded its public reporting initiatives by adding risk‐standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30‐day risk‐standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30‐day risk‐standardized mortality and readmission for acute myocardial infarction and heart failure; a description of the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically, we identified the 306 hospital referral regions for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. Hospital referral regions represent regional markets for tertiary care and are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes; they provide a more detailed look at variation in outcomes than state‐level results.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18
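The inverse‐variance weighting used for the regional summaries can be sketched as follows. The two‐hospital example is hypothetical and the function name is ours; it simply shows the more precisely estimated hospital dominating the weighted average.

```python
import numpy as np

def inverse_variance_weighted_mean(rates, variances):
    """Average hospital rates weighting each by the inverse of its estimated
    variance, so hospitals with more precise estimates contribute more."""
    weights = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(weights * np.asarray(rates, dtype=float)) / np.sum(weights))

# Hypothetical two-hospital region: the first hospital's rate (10.0%) is
# estimated four times as precisely, so it dominates the weighted mean.
rates = [10.0, 14.0]
variances = [0.25, 1.0]
regional_mean = inverse_variance_weighted_mean(rates, variances)  # 10.8
```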

To determine whether a hospital's performance is significantly different than the national rate we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period are excluded from this categorization on Hospital Compare.
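The categorization rule above amounts to comparing the hospital's 95% interval estimate with the national crude rate. This sketch assumes the interval endpoints are precomputed; the 25‐case threshold follows the text, and the function name is ours.

```python
def categorize_hospital(interval_low, interval_high, national_rate, n_cases, min_cases=25):
    """Categorize a hospital by whether the 95% interval estimate for its
    risk-standardized rate overlaps the national crude rate."""
    if n_cases < min_cases:
        return "not categorized"           # too few cases to report
    if interval_high < national_rate:
        return "better than the US national rate"
    if interval_low > national_rate:
        return "worse than the US national rate"
    return "no different than the US national rate"

# A hospital whose entire interval sits below the national crude rate is "better".
label = categorize_hospital(9.0, 10.5, national_rate=11.6, n_cases=120)
```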

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis, 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and rates ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0%, while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital one standard deviation above the national average were 1.68 times those for a patient treated at a hospital one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Table 1. Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates

                                              Mortality      Readmission
Patients (n)                                  1,118,583      1,161,817
Hospitals (n)                                 4,788          4,813
Patient age, years, median (Q1, Q3)           81 (74, 86)    80 (74, 86)
Nonwhite, %                                   11.1           11.1
Hospital case volume, median (Q1, Q3)         168 (77, 323)  174 (79, 334)
Risk‐standardized hospital rate, mean (SD)    11.2 (1.2)     18.3 (0.9)
  Minimum                                     6.7            13.6
  1st percentile                              7.5            14.9
  5th percentile                              8.5            15.8
  10th percentile                             9.0            16.4
  25th percentile                             10.0           17.2
  Median                                      11.1           18.2
  75th percentile                             12.3           19.2
  90th percentile                             13.5           20.4
  95th percentile                             14.4           21.1
  99th percentile                             16.1           22.8
  Maximum                                     20.9           26.7
Model fit statistics
  c‐Statistic                                 0.72           0.63
  Intrahospital correlation                   0.07           0.03

Abbreviation: SD, standard deviation.

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse, and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis, 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1, Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%), and rates ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4%, while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital one standard deviation above the national average were 1.40 times those for a patient treated at a hospital one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse, and 333 (7.0%) had fewer than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that, unlike the standard deviation, is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
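As a check, the reported coefficients of variation are consistent with the means and standard deviations in Table 1 (CV = 100 × SD / mean):

```python
def coefficient_of_variation(sd, mean):
    """CV in percent: dispersion normalized by the mean."""
    return 100.0 * sd / mean

# Mean (SD) risk-standardized rates from Table 1.
cv_mortality = coefficient_of_variation(1.2, 11.2)    # ~10.7
cv_readmission = coefficient_of_variation(0.9, 18.3)  # ~4.9
```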

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. The highest‐mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, the East and West South Central, the East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, the East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSRR, risk‐standardized readmission rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. Notably, the differences we observed in mortality and readmission rates across hospitals were not nearly as large as the differences in process performance reported in these earlier studies.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission, and our mortality and readmission models were validated against models based on medical record data, with highly correlated outputs from the 2 approaches.15, 24, 25 Third, our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, it is likely to have limited our study to patients with less severe disease, and it may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. Fourth, while we excluded patients with a 1‐day length of stay from the mortality analysis to reduce the risk of including patients who did not actually have pneumonia, we did not exclude them from the readmission analysis, because a very short length of stay may be a risk factor for readmission. Fifth, our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions. However, if the experience in acute myocardial infarction can serve as a guide, it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically concentrated than mortality rates; it is possible that this is related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.28-30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
  1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital‐Based Care in the United States, 2007 [Internet]. 2009 [cited 2009 Nov 7]. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed June 2010.
  2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP) [Internet]. 2007 [cited 2010 May 13]. Available at: http://www.hcup‐us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988‐2002. JAMA. 2005;294(21):2712‐2719.
  4. Heron M. Deaths: Leading Causes for 2006. NVSS [Internet]. 2010 Mar 31;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
  5. Centers for Medicare and Medicaid Services. Pneumonia [Internet]. [cited 2010 May 13]. Available at: http://www.qualitynet.org/dcs/ContentServer?cid=108981596702326(1):7585.
  6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182‐189.
  7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694‐2702.
  8. Medicare.gov ‐ Hospital Compare [Internet]. [cited 2009 Nov 6]. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default 2010. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2010. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2000 [cited 2009 Nov 7]. Available at: http://www.cms.hhs.gov/Reports/Reports/ItemDetail.asp?ItemID=CMS023176. Accessed June 2010.
  9. Krumholz H, Normand S, Bratzler D, et al. Risk‐Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30‐Day Mortality Model for Pneumonia [Internet]. Yale University; 2006. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2008. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page1999.
  10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206‐226.
  11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
  12. Patient Protection and Affordable Care Act [Internet]. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
  13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670‐1676.
  14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals — the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265‐274.
  15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407‐413.
  16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693‐1701.
  17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683‐1692.
  18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):72‐78.
  19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989‐995.
  20. Thomas JW, Hofer TP. Research evidence on the validity of risk‐adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371‐404.
  21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074‐1081.
  22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153‐157.
  23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
Issue
Journal of Hospital Medicine - 5(6)
Page Number
E12-E18
Display Headline
The performance of US hospitals as reflected in risk‐standardized 30‐day mortality and readmission rates for medicare beneficiaries with pneumonia
Legacy Keywords
community‐acquired and nosocomial pneumonia, quality improvement, outcomes measurement, patient safety, geriatric patient
Article Source
Copyright © 2010 Society of Hospital Medicine
Correspondence Location
Center for Quality of Care Research, Baystate Medical Center, 280 Chestnut St., Springfield, MA 01199