Association of the Position of a Hospital-Acquired Condition Diagnosis Code With Changes in Medicare Severity Diagnosis-Related Group Assignment

Tricia Johnson, PhD
Department of Health Systems Management, Rush University, Chicago, Illinois
Department of Medicine, Rush University Medical Center, Chicago, Illinois

One financial incentive to improve quality of care is the Centers for Medicare and Medicaid Services' (CMS) policy to not pay additionally for certain adverse events that are classified as hospital‐acquired conditions (HACs).[1, 2, 3] HACs are specific conditions that occur during the hospital stay and presumably could have been prevented.[4, 5, 6] Under the CMS policy, if an HAC occurs during a patient's stay, that condition is not included in the Medicare Severity Diagnosis‐Related Group (MS‐DRG) assignment.

The MS‐DRG assigned to a patient discharge determines reimbursement. Each MS‐DRG is assigned a weight, which adjusts for the fact that the treatment of different conditions consumes different resources and has different costs. Groups of patients who are expected to require above‐average resources have a higher weight than those who require fewer resources, and a higher‐weighted MS‐DRG assignment results in a higher payment. In some cases, the inclusion of the diagnosis code of an HAC in the determination of the MS‐DRG results in a higher complexity level and a higher DRG weight. The policy is designed to shift the incremental costs associated with treating the HAC to the hospital. As of October 2009, there were 10 HACs included in the CMS nonpayment program (see Supporting Table 1 in the online version of this article). CMS expanded the list of HACs to include 13 conditions in 2013.

Table 1. Characteristics of Patients With a Hospital‐Acquired Condition Discharged Between October 2007 and April 2008 (N=7,027)

| Variable | MS‐DRG Change, No. (%) or M (SD), N=980 | No MS‐DRG Change, No. (%) or M (SD), N=6,047 | P Value |
| --- | --- | --- | --- |
| Patient sociodemographic characteristics | | | |
| Age, y | 62.7 (18.9) | 57.5 (21.9) | <0.001 |
| Race | | | 0.024 |
| White | 687 (70.1) | 4,006 (66.3) | |
| Black | 166 (16.9) | 1,100 (18.2) | |
| Hispanic | 45 (4.6) | 416 (6.9) | |
| Other | 82 (8.4) | 525 (8.7) | |
| Sex | | | <0.001 |
| Male | 441 (45.0) | 3,298 (54.5) | |
| Female | 539 (55.0) | 2,749 (45.5) | |
| Payer | | | <0.001 |
| Commercial | 279 (28.5) | 1,609 (26.6) | |
| Medicaid | 88 (9.0) | 910 (15.1) | |
| Medicare | 532 (54.3) | 3,003 (49.7) | |
| Self‐pay/charity | 52 (5.3) | 331 (5.5) | |
| Other | 29 (3.0) | 194 (3.2) | |
| Severity of illness | | | <0.001 |
| Minor | 50 (5.1) | 71 (1.2) | |
| Moderate | 216 (22.0) | 359 (5.9) | |
| Major | 599 (61.1) | 1,318 (21.8) | |
| Extreme | 115 (11.7) | 4,299 (71.1) | |
| Patient clinical characteristics | | | |
| Number of ICD‐9 diagnosis codes per patient | 13.7 (6.0) | 20.2 (6.6) | <0.001 |
| MS‐DRG weight | 2.9 (2.1) | 5.9 (6.1) | <0.001 |
| Hospital characteristics | | | |
| Mean number of ICD‐9 diagnosis codes per patient per hospital | 8.5 (1.4) | 8.6 (1.4) | 0.280 |
| Total hospital discharges | 15,957 (6,553) | 16,857 (6,634) | <0.001 |
| HACs per 1,000 discharges | 9.8 (3.7) | 10.2 (3.7) | <0.001 |
| Hospital‐acquired condition | | | |
| Type of HAC | | | <0.001 |
| Pressure ulcer | 334 (34.1) | 1,599 (26.4) | |
| Falls/trauma | 96 (9.8) | 440 (7.3) | |
| Catheter‐associated UTI | 19 (1.9) | 215 (3.6) | |
| Vascular catheter infection | 26 (2.7) | 1,179 (19.5) | |
| DVT/pulmonary embolism | 448 (45.7) | 2,145 (35.5) | |
| Other conditions | 57 (5.8) | 469 (7.8) | |
| HAC position | | | <0.001 |
| 2nd code | 850 (86.7) | 697 (11.5) | |
| 3rd code | 45 (4.6) | 739 (12.2) | |
| 4th code | 30 (3.1) | 641 (10.6) | |
| 5th code | 15 (1.5) | 569 (9.4) | |
| 6th code or higher | 40 (4.1) | 3,401 (56.2) | |

  • NOTE: Abbreviations: DVT, deep venous thrombosis; HAC, hospital‐acquired conditions; ICD‐9, International Classification of Diseases, 9th Revision; M, mean; MS‐DRG, Medicare Severity Diagnosis‐Related Group; SD, standard deviation; UTI, urinary tract infection.

Withholding additional reimbursement for an HAC has been controversial. One area of debate is that the assignment of an HAC may be imprecise, in part because of variation in how physicians document in the medical record.[1, 2, 6, 7, 8, 9] Coding is derived from documentation in physician notes and is the primary mechanism for assigning International Classification of Diseases, 9th Revision, Clinical Modification (ICD‐9) diagnosis codes to the patient's encounter. The coding process begins with health information technicians (ie, medical record coders) reviewing all medical record documentation to assign ICD‐9 diagnosis and procedure codes.[10] In the hospital setting, primary and secondary diagnoses are determined according to standard definitions. Secondary diagnoses can be further separated into complications or comorbidities in the MS‐DRG system, which can affect reimbursement. The MS‐DRG is then determined using these diagnosis and procedure codes. Physician documentation is the principal source of data for hospital billing, because coders must assign a code based on what is documented in the chart. If key medical detail is missing or language is ambiguous, coding can be inaccurate, which may lead to inappropriate compensation.[11]

Accurate and complete ICD‐9 diagnosis and procedure coding is essential for correct MS‐DRG assignment and reimbursement.[12] Physicians may influence coding prioritization by overemphasizing a patient diagnosis or by downplaying the significance of new findings. In addition, unless the physician uses specific, accurate, and accepted terminology, the diagnosis may not appear in the list of diagnosis codes at all. Medical records with nonstandard abbreviations may result in coders omitting key diagnoses. Finally, when clinicians use qualified diagnoses such as "rule‐out" or "probable," the final diagnosis coded may not be accurate.[10]

Although the CMS policy creates a financial incentive for hospitals to improve quality, the extent to which the policy actually impacts reimbursement across multiple HACs has not been quantified. Additionally, if HACs, as a policy initiative, reflect actual quality of care, then the position of the ICD‐9 code should not affect MS‐DRG assignment. In this study, we evaluated the extent to which MS‐DRG assignment would have been influenced by the presence of an HAC and tested the association of the position of an HAC in the list of ICD‐9 diagnosis codes with changes in MS‐DRG assignment.

METHODS

Study Population

This study was a retrospective analysis of all patients discharged from hospital members of the University HealthSystem Consortium's (UHC) Clinical Data Base between October 2007 and April 2008. The data set was limited to patient discharge records with at least 1 of 10 HACs for which CMS no longer provides additional reimbursement (see Supporting Table 1 in the online version of this article). The presence of an HAC was indicated by the corresponding diagnosis code using the ICD‐9 diagnosis and procedure codes.

Data Source

UHC's Clinical Data Base is a database of patient discharge‐level administrative data used primarily for billing purposes. UHC's Clinical Data Base provides comparative data for in‐hospital healthcare outcomes using encounter‐level and line‐item transactional information from each member organization. UHC is a nonprofit alliance of 116 academic medical centers and 276 of their affiliated hospitals.

Dependent Variable: Change in MS‐DRG Assignment

The dependent variable was a change in MS‐DRG assignment. Change in MS‐DRG assignment was determined by comparing the MS‐DRG assigned when the HAC's ICD‐9 diagnosis code was considered a no‐payment event and was not included in the determination (ie, post‐policy DRG) with the MS‐DRG that would have been assigned when the HAC was included in the determination (ie, pre‐policy DRG). The list of ICD‐9 diagnosis codes was entered into MS‐DRG grouping software with the ICD‐9 diagnosis code for each HAC in the identical position presented to CMS. Up to 29 secondary ICD‐9 diagnosis and procedure codes were entered, but the analyses of the association with the position of the HAC used the first 9 diagnosis and 6 procedure codes processed by CMS, as only codes in these positions could have changed the MS‐DRG assigned during the study time period. If the 2 MS‐DRGs (pre‐policy DRG and post‐policy DRG) did not match, the case was classified as having a change in MS‐DRG assignment (MS‐DRG change).
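The comparison logic can be sketched as follows. Here `group_msdrg` is a hypothetical stand-in for the commercial MS-DRG grouper software used in the study, and the toy grouper and diagnosis codes are illustrative only:

```python
# Sketch of the dependent-variable construction: a case counts as an
# "MS-DRG change" when the grouper output differs depending on whether
# the HAC diagnosis code is allowed to count toward the assignment.

def msdrg_changed(diagnosis_codes, hac_code, group_msdrg):
    """Compare pre-policy DRG (HAC counted) with post-policy DRG (HAC excluded)."""
    pre_policy = group_msdrg(diagnosis_codes)
    post_policy = group_msdrg([c for c in diagnosis_codes if c != hac_code])
    return pre_policy != post_policy

# Toy grouper: assigns the higher-complexity DRG if any "MCC-like" code
# is present. The code set is a hypothetical placeholder.
MCC_LIKE = {"999.1"}

def toy_grouper(codes):
    return "DRG-291" if MCC_LIKE & set(codes) else "DRG-293"
```

If the HAC is the only MCC-like code on the record, excluding it changes the DRG; if another MCC-like code remains, the assignment is unchanged, which previews the position effect analyzed below.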

Independent variables included in this analysis were coding variables and patient characteristics. Coding variables included the total number of ICD‐9 diagnosis codes recorded in the discharge record, the absolute position of the HAC ICD‐9 diagnosis code in the order of all diagnosis codes, the weight for the actual MS‐DRG, and the specific type of HAC. The absolute position of the HAC was included in the analysis as a categorical variable (second position, third, fourth, fifth, and sixth position or higher). In addition, patient‐level characteristics, including sociodemographic characteristics, clinical factors, and severity of illness (minor, moderate, major, extreme),[6] and hospital‐level characteristics were included.

Statistical Analysis

Means and standard deviations or frequencies and percentages were used to describe the variables. A χ2 test was used to test for differences in the absolute position of the HAC with change in MS‐DRG assignment (change/no change). In addition, χ2 tests were used to test for differences in each of the other categorical independent variables with change in MS‐DRG assignment; t tests were used to test for differences in the continuous variables with change in MS‐DRG assignment.
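The χ2 computation underlying these comparisons can be sketched in pure Python, here applied to the HAC-position cross-tabulation from Table 1 (a real analysis would use a statistical package, as the authors did with SAS):

```python
# Chi-square statistic of independence for an r x c table of observed counts.
def chi_square_statistic(table):
    """table: list of rows of observed counts; returns the chi-square statistic."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Observed counts by HAC position (2nd, 3rd, 4th, 5th, 6th+), from Table 1:
observed = [
    [850, 45, 30, 15, 40],       # MS-DRG change (n=980)
    [697, 739, 641, 569, 3401],  # no MS-DRG change (n=6,047)
]
chi2 = chi_square_statistic(observed)  # very large, consistent with P<0.001
```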

Two multivariable binary logistic regression models were fit to test the relationship between change in MS‐DRG assignment and the absolute position of the HAC, adjusting for coding variables, patient characteristics, and hospital characteristics that were associated with change in MS‐DRG assignment in the bivariate analysis. The first model tested the relationship between change in MS‐DRG and position of the HAC without accounting for the specific type of HAC, and the second tested the relationship including both position and the specific type of HAC. Receiver operating characteristic (ROC) curves were developed for each model to evaluate predictive accuracy. Additionally, analyses were stratified by severity of illness, and the areas under the ROC curves for 3 models were compared to determine whether predictive accuracy increased with the inclusion of variables other than HAC position. The first model included HAC position only, the second model added type of HAC, and the third model added other coding variables and patient‐ and hospital‐level variables.

Two sensitivity analyses were performed to test the robustness of the results. The first analysis tested the sensitivity of the results to the specification of comorbid disease burden, as measured by number of diagnosis codes. We used Elixhauser's method[13] for identifying comorbid conditions to create binary variables indicating the presence or absence of 29 distinct comorbid conditions, then calculated the total number of comorbid conditions. The binary logistic regression model was refit, with the total number of comorbid conditions in place of the number of diagnosis codes. An additional binary logistic regression model was fit that included the individual comorbid conditions that were associated with change in MS‐DRG assignment in a bivariate analysis (P<0.05). The second sensitivity analysis evaluated whether hospital‐level variation in coding practices explained change in MS‐DRG assignment using a hierarchical binary logistic regression model that included hospital as a random effect.
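A minimal sketch of the comorbidity-count construction used in the first sensitivity analysis, with a tiny hypothetical prefix-to-condition table standing in for the full 29-condition Elixhauser mapping:

```python
# Each discharge's ICD-9 codes are mapped to binary comorbid-condition
# flags, then the distinct conditions are counted. The prefix table is a
# hypothetical placeholder, NOT the real Elixhauser algorithm.
HYPOTHETICAL_PREFIX_TO_CONDITION = {
    "428": "congestive_heart_failure",
    "250": "diabetes",
    "585": "renal_failure",
}

def comorbidity_count(diagnosis_codes):
    """Return the number of distinct comorbid conditions present."""
    conditions = set()
    for code in diagnosis_codes:
        for prefix, condition in HYPOTHETICAL_PREFIX_TO_CONDITION.items():
            if code.startswith(prefix):
                conditions.add(condition)
    return len(conditions)
```

Note that two codes mapping to the same condition count once, which is why this measure differs from the raw number of diagnosis codes it replaced in the refit model.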

All statistical analyses were conducted using the SAS version 9.2 statistical software package (SAS Institute Inc., Cary, NC). The Rush University Medical Center Institutional Review Board approved the study protocol.

RESULTS

Of the 954,946 discharges from UHC academic medical centers, 7,027 patients (0.7%) had an HAC. Of the patients with an HAC, 6,047 (86.2%) had no change in MS‐DRG assignment, whereas 980 (13.8%) had a change in MS‐DRG assignment. Patients with a change in MS‐DRG assignment were significantly different from those without a change on all patient‐level characteristics and all but 1 hospital characteristic (Table 1). The variable with the largest absolute difference between those with and without an MS‐DRG change was the position of the HAC: 86.7% of those with an MS‐DRG change had the HAC in the second position, compared with only 11.5% of those without a change.

After controlling for patient and hospital characteristics, an HAC in the second position in the list of ICD‐9 codes was associated with the greatest likelihood of a change in MS‐DRG assignment (P<0.001) (Table 2). Each additional ICD‐9 code decreased the odds of an MS‐DRG change (P=0.004); that is, having more secondary diagnosis codes was associated with a lower likelihood of an MS‐DRG change. After including the individual HACs in the regression model, the second position remained associated with the likelihood of a change in MS‐DRG assignment (results not shown). The predictive accuracy of our model did not improve, however, with the addition of type of HAC; the area under the ROC curve was 0.94 in both models, indicating high predictive power.
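To make an adjusted odds ratio of 40.52 concrete, one can apply it to an illustrative reference-group probability (the 5% baseline below is hypothetical, chosen only for illustration; the model's covariates are held fixed):

```python
# Convert a reference-group probability plus an odds ratio into the
# implied probability for the comparison group.
def apply_odds_ratio(p_reference, odds_ratio):
    odds = p_reference / (1 - p_reference) * odds_ratio
    return odds / (1 + odds)

# At an illustrative 5% baseline chance of an MS-DRG change, an OR of
# 40.52 for the 2nd position implies roughly a 68% chance.
p_second = apply_odds_ratio(0.05, 40.52)
```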

Table 2. Results of Binary Logistic Regression Model for Change in MS‐DRG Assignment (N=7,027)

| Variable | Odds Ratio | P Value |
| --- | --- | --- |
| Minor severity of illness | 6.80 | <0.001 |
| Moderate severity of illness | 5.52 | <0.001 |
| Major severity of illness | 8.02 | <0.001 |
| Number of ICD‐9 diagnosis codes per patient | 0.97 | 0.004 |
| HAC ICD‐9 diagnosis code in 2nd position | 40.52 | <0.001 |
| HAC ICD‐9 diagnosis code in 3rd position | 1.82 | 0.009 |
| HAC ICD‐9 diagnosis code in 4th position | 1.72 | 0.032 |
| HAC ICD‐9 diagnosis code in 5th position | 1.15 | 0.662 |
| Area under the ROC curve | 0.94 | <0.001* |
| Area under the ROC curve, model with patient sociodemographic characteristics only | 0.85 | |

  • NOTE: The reference category includes extreme severity of illness and HAC ICD‐9 code in the 6th position or higher. The model controls for patient age, sex, race/ethnicity, primary payer, hospital HAC rate, and total number of discharges per hospital. Abbreviations: HAC, hospital‐acquired conditions; ICD‐9, International Classification of Diseases, 9th Revision; MS‐DRG, Medicare Severity Diagnosis‐Related Group; ROC, receiver operating characteristic. *Compared to the model with patient sociodemographic characteristics only.

The proportion of cases with a change in MS‐DRG by severity of illness is reported in Table 3. The largest proportion of cases with a change in MS‐DRG was in the minor severity of illness category (41.3%), whereas only 2.6% of cases with an extreme severity of illness had a change in MS‐DRG. Figure 1 shows ROC curves stratified by severity of illness. Figure 1A illustrates the ROC curves for the 121 (1.7%) patients with minor severity of illness. The area under the ROC curve for the model including HAC position only was 0.74, indicating moderate predictive power. The inclusion of HAC type increased the area to 0.91, and the inclusion of sociodemographic characteristics further increased it to 0.95. Figures 1B through 1D illustrate the ROC curves for moderate, major, and extreme severity of illness. For more severe illness, the predictive accuracy of the model with only HAC position was similar to that of the full model, demonstrating that HAC position alone had high predictive power for change in MS‐DRG assignment.
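The area under the ROC curve reported here is equivalent to the probability that a randomly chosen case with an MS-DRG change is scored higher by the model than a randomly chosen case without one; a minimal rank-based sketch:

```python
# AUC via the Mann-Whitney formulation: fraction of (positive, negative)
# pairs where the positive case gets the higher model score; ties count 0.5.
def auc(scores_positive, scores_negative):
    wins = 0.0
    for sp in scores_positive:
        for sn in scores_negative:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_positive) * len(scores_negative))
```

An AUC of 0.94 therefore means the model ranks a true MS-DRG-change case above a no-change case about 94% of the time.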

Table 3. Percentage of Patients With a Change in MS‐DRG by Severity of Illness, Discharges Between October 2007 and April 2008 (N=7,027)

| Variable | No. | Within‐Category % With MS‐DRG Change |
| --- | --- | --- |
| Severity of illness | | |
| Minor | 121 | 41.3 |
| Moderate | 575 | 37.6 |
| Major | 1,917 | 31.3 |
| Extreme | 4,414 | 2.6 |

  • NOTE: Abbreviations: MS‐DRG, Medicare Severity Diagnosis‐Related Group.
Figure 1. Receiver operating characteristic (ROC) curves stratified by severity of illness. Abbreviations: AUC, area under the curve; HAC, hospital‐acquired conditions.

In a sensitivity analysis that evaluated the robustness of our results to the specification of disease burden, inclusion of the number of comorbid conditions did not improve the predictive accuracy of the model. Although inclusion of individual comorbid conditions rather than number of diagnosis codes attenuated the odds ratio (OR) for HAC position (OR: 40.5 in the original model vs OR: 32.9 in the model with individual comorbid conditions), the improvement in the model's predictive accuracy was small (area under the ROC curve=0.936 in the original model vs 0.943 in the model with individual conditions, P<0.001) (results not shown). In a sensitivity analysis using a hierarchical logistic regression model that included hospital random effects, hospital‐level variation in coding practices did not attenuate the relationship between HAC position and MS‐DRG change (results not shown).

DISCUSSION

This study investigated the association of a change in MS‐DRG assignment and position of the ICD‐9 diagnosis codes for HACs in a sample of patients discharged from US academic medical centers. We found that only 14% of the MS‐DRGs for patients with an HAC would have experienced a change in DRG assignment. Our results are consistent with those of Teufack et al.,[14] who estimated the economic impact of CMS' HAC policy for neurosurgery services at a single hospital to be 0.007% of overall net revenues. Nevertheless, the majority of hospitals have increased their efforts to prevent HACs that are included in CMS' policy.[15] At the same time, most hospitals have not increased their budgets for preventing HACs, and instead have reallocated resources from nontargeted HACs to those included in CMS' policy.

The low proportion of records that are impacted by the policy may be partially explained by the fact that CMS' policy only has an impact on reimbursement for MS‐DRGs with multiple levels. For example, heart failure has 3 levels of reimbursement in the MS‐DRG system (Table 4). Prior to CMS' policy, a heart failure patient with an air embolism as an HAC would have been classified in the most severe MS‐DRG (291), whereas after implementation the patient would be classified in the least severe MS‐DRG, if no other complication or comorbidity (CC) or a major complication or comorbidity (MCC) were present. Chest pain has only 1 level, and reimbursement for a patient with an HAC and classified in the chest pain MS‐DRG would not be impacted by CMS' policy. Most hospitalized patients are complicated, and the proportion of patients who are complicated will continue to increase over time as less complex care shifts to the ambulatory setting. The relative effectiveness of CMS' policy is likely to diminish with the continued shift of care to the ambulatory setting.

Table 4. Example of MS‐DRG Codes and Weights, Fiscal Year 2014

| Variable | MS‐DRG | DRG Weight |
| --- | --- | --- |
| Heart failure and shock | | |
| With major complications and comorbidities | 291 | 1.5062 |
| With complications and comorbidities | 292 | 0.9952 |
| Without major complications or comorbidities | 293 | 0.6718 |
| Chest pain | 313 | 0.5992 |

  • NOTE: Abbreviations: DRG, Diagnosis‐Related Group; MS‐DRG, Medicare Severity Diagnosis‐Related Group.
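Using the Table 4 weights, the payment consequence when an HAC would have been the patient's only MCC (the policy dropping the case from MS-DRG 291 to MS-DRG 293) can be sketched as follows; the base rate is a hypothetical figure for illustration, and real CMS payments include wage-index and other hospital-specific adjustments not shown:

```python
# DRG weights from Table 4 (heart failure and shock, FY2014).
WEIGHTS = {"MS-DRG 291": 1.5062, "MS-DRG 292": 0.9952, "MS-DRG 293": 0.6718}
HYPOTHETICAL_BASE_RATE = 5_500.00  # illustrative dollars per weight unit

def payment(drg):
    """Payment is proportional to the assigned DRG's weight."""
    return round(WEIGHTS[drg] * HYPOTHETICAL_BASE_RATE, 2)

# Reimbursement forgone when the HAC no longer counts toward the MS-DRG:
lost_reimbursement = round(payment("MS-DRG 291") - payment("MS-DRG 293"), 2)
```

Because chest pain (MS-DRG 313) has only one level, the same HAC on a chest-pain record shifts nothing, which is the asymmetry the paragraph above describes.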

Patient discharges with a diagnosis code for an HAC in the second position were substantially more likely to have a change in MS‐DRG assignment than cases with an HAC listed lower in the final list of diagnosis codes. Perhaps it is not surprising that MS‐DRG assignment is most likely to change when the HAC is in the second position, because an ICD‐9 diagnosis code in this position is more likely to be a major complication or comorbidity. For HACs listed lower in the list of ICD‐9 diagnosis codes, it is likely that the patient had another major complication or comorbidity listed in the second position that would have maintained classification in the same MS‐DRG. Our results suggest that physicians and hospitals caring for patients with lower complexity of illness will sustain a higher financial burden from an HAC under CMS' policy than providers whose patients sustain the exact same HAC but have underlying medical care of greater complexity.

These results raise further concerns about the ability of CMS' payment policy to improve quality. One criticism of CMS' policy is that not all HACs are universally preventable. If they are not preventable, payment reductions promulgated via the policy would be punitive rather than incentivizing. In their study of central catheter‐associated bloodstream infections and catheter‐associated urinary tract infections, for example, Lee et al. found no change in infection rates after implementation of CMS' policy.[16] As such, some have suggested that HACs should not be used to determine reimbursement and that CMS should abandon its current nonpayment policy.[4, 17] Our findings echo this criticism, given that the financial penalty for an HAC depends on whether a patient is more or less complex.

Because coding emanates from physician documentation, a uniform documentation process must exist to ensure consistent coding practices.[1, 2, 7, 9] This is not the case, however, and some hospitals comanage documentation to refine or maximize the number of ICD‐9 diagnosis and procedure codes. Furthermore, there are differences in the documentation practices of individual physicians. If physician documentation and coding variation lead to fewer ICD‐9 codes during an encounter, the chance that an HAC will influence an MS‐DRG change increases.

Another source of variation in coding practices found in this study was code sequencing. Although guidelines for appropriate ICD‐9 diagnosis coding currently exist, individual subjectivity remains. The most essential step in the coding process is identifying the principal diagnosis by extrapolating from physician documentation and clinical data. For example, when a patient is admitted for chest pain and evaluation determines that the patient experienced a myocardial infarction, then myocardial infarction becomes the principal diagnosis. Based on that principal diagnosis, coders must select the relevant secondary diagnoses. The process involves a series of steps that must be followed exactly in order to ensure accurate coding.[12] There are no guidelines that coding personnel must follow to sequence secondary diagnoses, with the exception of listing MCCs and CCs before other secondary diagnoses. Ultimately, the order in which these codes are assigned may result in unfavorable variation in MS‐DRG assignment.[1, 2, 4, 7, 8, 9, 17]
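The one sequencing constraint noted above (MCCs and CCs listed before other secondary diagnoses) can be sketched as a stable sort; the MCC/CC code sets below are hypothetical placeholders, not the real CMS lists:

```python
# Secondary diagnoses sorted so MCCs come first, then CCs, then everything
# else. Within each tier, Python's stable sort preserves the coder's
# original (unguided) order, which is where the sequencing variation lives.
HYPOTHETICAL_MCC = {"998.59"}
HYPOTHETICAL_CC = {"428.0"}

def rank(code):
    if code in HYPOTHETICAL_MCC:
        return 0
    if code in HYPOTHETICAL_CC:
        return 1
    return 2  # other secondary diagnoses; relative order is unguided

def sequence_secondary(codes):
    return sorted(codes, key=rank)
```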

There are a number of limitations to this study. First, our cohort included only UHC‐affiliated academic medical centers, which may not represent all acute‐care hospitals and their coding practices. Second, our data are for discharges prior to implementation of the policy; this, however, allowed us to analyze the anticipated impact of the policy before any direct or indirect changes in coding occurred in response to CMS' policy. Additionally, the number of diagnosis codes accepted by CMS was expanded from 9 to 25 in 2011. Future analyses that include MS‐DRG classifications with the expanded number of diagnosis codes should be conducted to validate our findings and determine whether any changes have occurred over time. Third, it is not known whether low illness severity scores signify patient or hospital characteristics. If they represent patient characteristics, then CMS' policy will disproportionately affect hospitals taking care of less severely ill patients. Alternatively, if hospital coding practice explains more of the variation in the number of ICD‐9 codes (and thus severity of illness), then the system of adjudicating reimbursement via HACs to incentivize quality of care will be flawed, as there is no standard position for HACs on a longer diagnosis list. Finally, we did not evaluate the change in DRG weight with the reassignment of MS‐DRG if the HAC had been included in the calculation. Future work should evaluate whether there is a differential impact of the policy by change in MS‐DRG weight.

CONCLUSION

Under CMS' current policy, hospitals and physicians caring for patients who have lower severity of illness and an HAC will be penalized disproportionately more than those caring for more complex, sicker patients with the identical HAC. If, in fact, HACs are indicators of a hospital's quality of care, then the CMS policy will likely do little to foster improved quality unless there is a reduction in coding practice variation and modifications to ensure that the policy impacts reimbursement independent of severity of illness.

Disclosures

The authors acknowledge the financial support for data acquisition from the Rush University College of Health Sciences. The authors report no conflicts of interest.

References
  1. Centers for Medicare and Medicaid Services. Hospital‐acquired conditions (present on admission indicator). Available at: http://www.cms.hhs.gov/HospitalAcqCond/05_Coding.asp#TopOfPage. Updated 2012. Accessed September 20, 2012.
  2. Centers for Medicare and Medicaid Services. Hospital‐acquired conditions: coding. Available at: http://www.cms.gov/Medicare/Medicare‐Fee‐for‐Service‐Payment/HospitalAcqCond/Coding.html. Updated 2012. Accessed February 2, 2012.
  3. ICD‐9‐CM 2009 Coders' Desk Reference for Procedures. Eden Prairie, MN: Ingenix; 2009.
  4. Averill RF, Hughes JS, Goldfield NI, McCullough EC. Hospital complications: linking payment reduction to preventability. Jt Comm J Qual Patient Saf. 2009;35(5):283-285.
  5. McNutt R, Johnson TJ, Odwazny R, et al. Change in MS‐DRG assignment and hospital reimbursement as a result of Centers for Medicare
Journal of Hospital Medicine. 9(11):707-713.

One financial incentive to improve quality of care is the Centers for Medicare and Medicaid Services' (CMS) policy to not pay additionally for certain adverse events that are classified as hospital‐acquired conditions (HACs).[1, 2, 3] HACs are specific conditions that occur during the hospital stay and presumably could have been prevented.[4, 5, 6] Under the CMS policy, if an HAC occurs during a patient's stay, that condition is not included in the Medicare Severity Diagnosis‐Related Group (MS‐DRG) assignment.

The MS‐DRG assigned to a patient discharge determines reimbursement. Each MS‐DRG is assigned a weight, which is used to adjust for the fact that the treatment of different conditions consume different resources and have difference costs. Groups of patients who are expected to require above‐average resources have a higher weight than those who require fewer resources, and higher‐weighted MS‐DRG assignment results in a higher payment. In some cases, the inclusion of the diagnosis code of an HAC in the determination of the MS‐DRG results in a higher complexity level and higher DRG weight. The policy is designed to shift the incremental costs associated with treating the HAC to the hospital. As of October 2009, there were 10 HACs included in the CMS nonpayment program (see Supporting Table 1 in the online version of this article). CMS expanded the list of HACs to include 13 conditions in 2013.

Characteristics of Patients With a Hospital‐Acquired Condition Discharged Between October 2007 and April 2008 (N=7,027)
VariableMS‐DRG Change, No. (%) or MSD, N=980No MS‐DRG Change, No. (%) or MSD, N=6,047P Value
  • NOTE: Abbreviations: DVT, deep venous thrombosis; HAC, hospital‐acquired conditions; ICD‐9, International Classification of Diseases, 9th Revision; M, mean; MS‐DRG, Medicare Severity Diagnosis‐Related Group; SD, standard deviation; UTI, urinary tract infection.

Patient sociodemographic characteristics
Age, y62.718.957.521.9<0.001
Race   
White687 (70.1)4,006 (66.3)0.024
Black166 (16.9)1,100 (18.2) 
Hispanic45 (4.6)416 (6.9) 
Other82 (8.4)525 (8.7) 
Sex  <0.001
Male441 (45.0)3,298 (54.5) 
Female539 (55.0)2,749 (45.5) 
Payer  <0.001
Commercial279 (28.5)1,609 (26.6) 
Medicaid88 (9.0)910 (15.1) 
Medicare532 (54.3)3,003 (49.7) 
Self‐pay/charity52 (5.3)331 (5.5) 
Other29 (3.0)194 (3.2) 
Severity of illness  <0.001
Minor50 (5.1)71 (1.2) 
Moderate216 (22.0)359 (5.9) 
Major599 (61.1)1,318 (21.8) 
Extreme115 (11.7)4,299 (71.1) 
Patient clinical characteristics
Number of ICD‐9 diagnosis codes per patient13.76.020.26.6<0.001
MS‐DRG weight2.92.15.96.1<0.001
Hospital characteristics
Mean number of ICD‐9 diagnosis codes per patient per hospital8.51.48.61.40.280
Total hospital discharges15,9576,55316,8576,634<0.001
HACs per 1,000 discharges9.83.710.23.7<0.001
Hospital‐acquired condition
Type of HAC  <0.001
Pressure ulcer334 (34.1)1,599 (26.4) 
Falls/trauma96 (9.8)440 (7.3) 
Catheter‐associated UTI19 (1.9)215 (3.6) 
Vascular catheter infection26 (2.7)1,179 (19.5) 
DVT/pulmonary embolism448 (45.7)2,145 (35.5) 
Other conditions57 (5.8)469 (7.8) 
HAC position  <0.001
2nd code850 (86.7)697 (11.5) 
3rd code45 (4.6)739 (12.2) 
4th code30 (3.1)641 (10.6) 
5th code15 (1.5)569 (9.4) 
6th code or higher40 (4.1)3,401 (56.2) 

Withholding additional reimbursement for an HAC has been controversial. One area of debate is that the assignment of an HAC may be imprecise, in part due to the variation in how physicians document in the medical record.[1, 2, 6, 7, 8, 9] Coding is derived from documentation in physician notes and is the primary mechanism for assigning International Classification of Diseases, 9th Revision, Clinical Modification (ICD‐9) diagnosis codes to the patient's encounter. The coding process begins with health information technicians (ie, medical record coders) reviewing all medical record documentation to assign diagnosis and procedure codes using the ICD‐9 codes.[10] Primary and secondary diagnoses are determined by certain definitions in the hospital setting. Secondary diagnoses can be further separated into complications or comorbidities in the MS‐DRG system, which can affect reimbursement. The MS‐DRG is then determined using these diagnosis and procedure codes. Physician documentation is the principal source of data for hospital billing, because health information technicians (ie, medical record coders) must assign a code based on what is documented in the chart. If key medical detail is missing or language is ambiguous, then coding can be inaccurate, which may lead to inappropriate compensation.[11]

Accurate and complete ICD‐9 diagnosis and procedure coding is essential for correct MS‐DRG assignment and reimbursement.[12] Physicians may influence coding prioritization by either over‐emphasizing a patient diagnosis or by downplaying the significance of new findings. In addition, unless the physician uses specific, accurate, and accepted terminology, the diagnosis may not even appear in the list of diagnosis codes. Medical records with nonstandard abbreviations may result in coder‐omission of key diagnoses. Finally, when clinicians use qualified diagnoses such as rule‐out or probable, the final diagnosis coded may not be accurate.[10]

Although the CMS policy creates a financial incentive for hospitals to improve quality, the extent to which the policy actually impacts reimbursement across multiple HACs has not been quantified. Additionally, if HACsas a policy initiativereflect actual quality of care, then the position of the ICD‐9 code should not affect MS‐DRG assignment. In this study we evaluated the extent to which MS‐DRG assignment would have been influenced by the presence of an HAC and tested the association of the position of an HAC in the list of ICD‐9 diagnosis codes with changes in MS‐DRG assignment.

METHODS

Study Population

This study was a retrospective analysis of all patients discharged from hospital members of the University HealthSystem Consortium's (UHC) Clinical Data Base between October 2007 and April 2008. The data set was limited to patient discharge records with at least 1 of 10 HACs for which CMS no longer provides additional reimbursement (see Supporting Table 1 in the online version of this article). The presence of an HAC was indicated by the corresponding diagnosis code using the ICD‐9 diagnosis and procedure codes.

Data Source

UHC's Clinical Data Base contains patient discharge‐level administrative data used primarily for billing purposes and provides comparative data on in‐hospital healthcare outcomes using encounter‐level and line‐item transactional information from each member organization. UHC is a nonprofit alliance of 116 academic medical centers and 276 of their affiliated hospitals.

Dependent Variable: Change in MS‐DRG Assignment

The dependent variable was a change in MS‐DRG assignment. A change was identified by comparing the MS‐DRG assigned when the HAC's ICD‐9 diagnosis code was considered a no‐payment event and was not included in the determination (ie, post‐policy DRG) with the MS‐DRG that would have been assigned when the HAC was included in the determination (ie, pre‐policy DRG). The list of ICD‐9 diagnosis codes was entered into MS‐DRG grouping software with the ICD‐9 diagnosis code for each HAC in the identical position as presented to CMS. Up to 29 secondary ICD‐9 diagnosis and procedure codes were entered, but the analyses of the association with the position of the HAC used the first 9 diagnosis and 6 procedure codes processed by CMS, as only codes in these positions could have changed the MS‐DRG assigned during the study time period. If the 2 MS‐DRGs (pre‐policy DRG and post‐policy DRG) did not match, the case was classified as having a change in MS‐DRG assignment (MS‐DRG change).
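
The construction of this outcome flag can be sketched as follows. The grouping itself was performed by MS‐DRG grouping software, so the `group` callable, the helper name, and the example codes below are hypothetical stand‐ins, not the actual grouper logic:

```python
def is_msdrg_change(diagnosis_codes, hac_codes, group):
    """Return True if excluding the HAC code(s) changes the MS-DRG.

    `group` stands in for the MS-DRG grouper (the study used commercial
    grouping software); it maps a list of ICD-9 codes to an MS-DRG.
    """
    # Pre-policy DRG: the HAC code enters the determination.
    pre_policy = group(diagnosis_codes)
    # Post-policy DRG: the HAC is a no-payment event and is excluded.
    post_policy = group([c for c in diagnosis_codes if c not in hac_codes])
    return pre_policy != post_policy
```

With a toy grouper that assigns a higher‐weighted DRG whenever any code from a complication list is present, a discharge whose only complication is the HAC is flagged as an MS‐DRG change, while a discharge with a second, non‐HAC complication is not.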

Independent variables included in this analysis were coding variables, patient characteristics, and hospital characteristics. Coding variables included the total number of ICD‐9 diagnosis codes recorded in the discharge record, the absolute position of the HAC ICD‐9 diagnosis code in the order of all diagnosis codes, the weight for the actual MS‐DRG, and the specific type of HAC. The absolute position of the HAC was included in the analysis as a categorical variable (second position, third, fourth, fifth, and sixth position or higher). Patient‐level characteristics included sociodemographic characteristics, clinical factors, and severity of illness (minor, moderate, major, extreme)[6]; hospital‐level characteristics were also included.

Statistical Analysis

Means and standard deviations or frequencies and percentages were used to describe the variables. A χ2 test was used to test for differences in the absolute position of the HAC with change in MS‐DRG assignment (change/no change). In addition, χ2 tests were used to test for differences in each of the other categorical independent variables with change in MS‐DRG assignment; t tests were used to test for differences in the continuous variables with change in MS‐DRG assignment.
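
As an illustration, the Pearson chi‐square statistic for a 2×2 cross‐tabulation, here HAC in the 2nd position versus a lower position, by MS‐DRG change, can be computed directly from the Table 1 counts. This is a minimal sketch without continuity correction:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# HAC in 2nd position vs lower, by MS-DRG change (counts from Table 1):
# change & 2nd = 850, change & lower = 980 - 850 = 130,
# no change & 2nd = 697, no change & lower = 6,047 - 697 = 5,350.
statistic = chi_square_2x2(850, 130, 697, 5350)
```

The resulting statistic far exceeds 3.84, the critical value at 1 degree of freedom and α=0.05, consistent with the P<0.001 reported in Table 1.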

Two multivariable binary logistic regression models were fit to test the relationship between change in MS‐DRG assignment with the absolute position of the HAC, adjusting for coding variables, patient characteristics, and hospital characteristics that were associated with change in MS‐DRG assignment in the bivariate analysis. The first model tested the relationship between change in MS‐DRG and position of the HAC, without accounting for the specific type of HAC, and the second tested the relationship including both position and the specific type of HAC. Receiver operating characteristic (ROC) curves were developed for each model to evaluate the predictive accuracy. Additionally, analyses were stratified by severity of illness, and the areas under the ROC curves for 3 models were compared to determine whether the predictive accuracy increased with the inclusion of variables other than HAC position. The first model included HAC position only, the second model added type of HAC, and the third model added other coding variables and patient‐ and hospital‐level variables.
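
The area under an ROC curve equals the probability that a randomly chosen case (a discharge with an MS‐DRG change) receives a higher model score than a randomly chosen non‐case, so it can be computed from scores alone. A minimal sketch using this Mann‐Whitney form, with hypothetical scores in the test rather than the study's fitted probabilities:

```python
def area_under_roc(case_scores, control_scores):
    """ROC area via its Mann-Whitney interpretation: the fraction of
    (case, control) pairs in which the case scores higher, ties counted as half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in case_scores
        for n in control_scores
    )
    return wins / (len(case_scores) * len(control_scores))
```

A value of 0.5 indicates no discrimination; the 0.94 reported for the fitted models means the model ranks a discharge with an MS‐DRG change above one without a change 94% of the time.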

Two sensitivity analyses were performed to test the robustness of the results. The first analysis tested the sensitivity of the results to the specification of comorbid disease burden, as measured by number of diagnosis codes. We used Elixhauser's method[13] for identifying comorbid conditions to create binary variables indicating the presence or absence of 29 distinct comorbid conditions, then calculated the total number of comorbid conditions. The binary logistic regression model was refit, with the total number of comorbid conditions in place of the number of diagnosis codes. An additional binary logistic regression model was fit that included the individual comorbid conditions that were associated with change in MS‐DRG assignment in a bivariate analysis (P<0.05). The second sensitivity analysis evaluated whether hospital‐level variation in coding practices explained change in MS‐DRG assignment using a hierarchical binary logistic regression model that included hospital as a random effect.
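
Counting comorbid conditions in this way reduces to flagging each condition at most once, however many qualifying codes appear in the record. The abbreviated code‐to‐condition map below is hypothetical; the study used Elixhauser's full ICD‐9 code lists for 29 conditions:

```python
# Hypothetical, abbreviated mapping from ICD-9 codes to comorbidity labels;
# Elixhauser's method defines 29 conditions, each with its own code list.
COMORBIDITY_MAP = {
    "428.0": "congestive heart failure",
    "401.9": "hypertension",
    "250.00": "diabetes, uncomplicated",
}

def comorbidity_count(diagnosis_codes):
    # Each condition counts once, regardless of how many of its codes appear.
    conditions = {COMORBIDITY_MAP[c] for c in diagnosis_codes if c in COMORBIDITY_MAP}
    return len(conditions)
```

This count replaced the raw number of diagnosis codes in the refit model, which separates comorbid disease burden from sheer coding volume.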

All statistical analyses were conducted using the SAS version 9.2 statistical software package (SAS Institute Inc., Cary, NC). The Rush University Medical Center Institutional Review Board approved the study protocol.

RESULTS

Of the 954,946 discharges from UHC academic medical centers, 7027 patients (0.7%) had an HAC. Of the patients with an HAC, 6047 did not change MS‐DRG assignment, whereas 980 patients (13.8%) had a change in MS‐DRG assignment. Patients with a change in MS‐DRG assignment were significantly different from those without a change in MS‐DRG assignment on all patient‐level characteristics and all but 1 hospital characteristic (Table 1). The variable with the largest absolute difference between those with and without a change in MS‐DRG was the actual position of the HAC; 86.7% of those with an MS‐DRG change had their HAC in the second position, whereas those without a change had only 11.5% in the second position.

After controlling for patient and hospital characteristics, an HAC in the second position in the list of ICD‐9 codes was associated with the greatest likelihood of a change in MS‐DRG assignment (P<0.001) (Table 2). Each additional ICD‐9 code decreased the odds of an MS‐DRG change (P=0.004), demonstrating that having more secondary diagnosis codes was associated with a lesser likelihood of an MS‐DRG change. After including the individual HACs in the regression model, the second position remained associated with the likelihood of a change in MS‐DRG assignment (results not shown). The predictive accuracy of our model did not improve, however, with the addition of type of HAC. The area under the ROC curve was 0.94 in both models, indicating high predictive power.

Results of Binary Logistic Regression Model for Change in MS‐DRG Assignment (N=7,027)
Variable | Odds Ratio | P Value
  • NOTE: The reference category includes extreme severity of illness and HAC ICD‐9 code in the 6th position or higher. The model controls for patient age, sex, race/ethnicity, primary payer, hospital HAC rate, and total number of discharges per hospital. Abbreviations: HAC, hospital‐acquired conditions; ICD‐9, International Classification of Diseases, 9th Revision; MS‐DRG, Medicare Severity Diagnosis‐Related Group; ROC, receiver operating characteristic. *Compared to the model with patient sociodemographic characteristics only.

Minor severity of illness | 6.80 | <0.001
Moderate severity of illness | 5.52 | <0.001
Major severity of illness | 8.02 | <0.001
Number of ICD‐9 diagnosis codes per patient | 0.97 | 0.004
HAC ICD‐9 diagnosis code in 2nd position | 40.52 | <0.001
HAC ICD‐9 diagnosis code in 3rd position | 1.82 | 0.009
HAC ICD‐9 diagnosis code in 4th position | 1.72 | 0.032
HAC ICD‐9 diagnosis code in 5th position | 1.15 | 0.662
Area under the ROC curve | 0.94 | <0.001*
Area under the ROC curve, model with patient sociodemographic characteristics only | 0.85 |

The proportion of cases with a change in MS‐DRG by severity of illness is reported in Table 3. The largest proportion of cases with a change in MS‐DRG was in the minor severity of illness category (41.3%), whereas only 2.6% of cases with an extreme severity of illness had a change in MS‐DRG. Figure 1 shows ROC curves stratified by severity of illness. Figure 1A illustrates the ROC curves for the 121 (1.7%) patients with minor severity of illness. The area under the ROC curve for the model including HAC position only was 0.74, indicating moderate predictive power. The inclusion of HAC type increased the predictive power to 0.91, and inclusion of sociodemographic characteristics further increased the predictive power to 0.95. Figure 1B through 1D illustrates the ROC curves for moderate, major, and extreme severities of illness. For more severe illnesses, the predictive accuracy of the models with only HAC position was similar to that of the full models, demonstrating that HAC position alone had high predictive power for change in MS‐DRG assignment.

Percentage of Patients With a Change in MS‐DRG by Severity of Illness, Discharges Between October 2007 and April 2008 (N=7,027)
Variable | No. | Within‐Category % With MS‐DRG Change
  • NOTE: Abbreviations: MS‐DRG, Medicare Severity Diagnosis‐Related Group.

Severity of illness
Minor | 121 | 41.3
Moderate | 575 | 37.6
Major | 1,917 | 31.3
Extreme | 4,414 | 2.6
Figure 1
Receiver operating characteristic (ROC) curves stratified by severity of illness. Abbreviations: AUC, area under the curve; HAC, hospital‐acquired conditions.

In a sensitivity analysis that evaluated the robustness of our results to the specification of disease burden, inclusion of the number of comorbid conditions did not improve the predictive accuracy of the model. Although inclusion of individual comorbid conditions rather than number of diagnosis codes attenuated the odds ratio (OR) for HAC position (OR: 40.5 in the original model vs OR: 32.9 in the model with individual comorbid conditions), the improvement of the predictive accuracy of the model was small (area under the ROC curve=0.936 in the original model vs 0.943 in the model with individual conditions, P<0.001) (results not shown). In a sensitivity analysis using a hierarchical logistic regression model that included hospital random effects, hospital‐level variation in coding practices did not attenuate the relationship between HAC position and MS‐DRG change (results not shown).

DISCUSSION

This study investigated the association between the position of the ICD‐9 diagnosis code for an HAC and a change in MS‐DRG assignment in a sample of patients discharged from US academic medical centers. We found that only 14% of patients with an HAC would have experienced a change in MS‐DRG assignment. Our results are consistent with those of Teufack et al.,[14] who estimated the economic impact of CMS' HAC policy for neurosurgery services at a single hospital to be 0.007% of overall net revenues. Nevertheless, the majority of hospitals have increased their efforts to prevent HACs that are included in CMS' policy.[15] At the same time, most hospitals have not increased their budgets for preventing HACs, and instead have reallocated resources from nontargeted HACs to those included in CMS' policy.

The low proportion of records that are impacted by the policy may be partially explained by the fact that CMS' policy only has an impact on reimbursement for MS‐DRGs with multiple levels. For example, heart failure has 3 levels of reimbursement in the MS‐DRG system (Table 4). Prior to CMS' policy, a heart failure patient with an air embolism as an HAC would have been classified in the most severe MS‐DRG (291), whereas after implementation the patient would be classified in the least severe MS‐DRG, if no other complication or comorbidity (CC) or a major complication or comorbidity (MCC) were present. Chest pain has only 1 level, and reimbursement for a patient with an HAC and classified in the chest pain MS‐DRG would not be impacted by CMS' policy. Most hospitalized patients are complicated, and the proportion of patients who are complicated will continue to increase over time as less complex care shifts to the ambulatory setting. The relative effectiveness of CMS' policy is likely to diminish with the continued shift of care to the ambulatory setting.

Example of MS‐DRG Codes and Weights, Fiscal Year 2014
Variable | MS‐DRG | DRG Weight
  • NOTE: Abbreviations: DRG, Diagnosis‐Related Group; MS‐DRG, Medicare Severity Diagnosis‐Related Group.

Heart failure and shock
With major complications and comorbidities | 291 | 1.5062
With complications and comorbidities | 292 | 0.9952
Without major complications or comorbidities | 293 | 0.6718
Chest pain | 313 | 0.5992
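
Using the Table 4 weights, the revenue at stake when a heart failure discharge drops from MS‐DRG 291 to 293 can be approximated as the weight difference times the hospital's base payment rate. The base rate below is a hypothetical round number, not an actual CMS rate, and the formula ignores wage‐index and other adjustments:

```python
def drg_payment(drg_weight, base_rate):
    # Simplified MS-DRG payment: DRG weight times the hospital's base rate
    # (actual CMS payments include wage-index and other adjustments).
    return drg_weight * base_rate

BASE_RATE = 5_000.0  # hypothetical base rate, in dollars

with_mcc = drg_payment(1.5062, BASE_RATE)        # MS-DRG 291
without_cc_mcc = drg_payment(0.6718, BASE_RATE)  # MS-DRG 293
shortfall = with_mcc - without_cc_mcc            # cost shifted to the hospital
```

At this assumed base rate, the reclassification would cost the hospital $4,172 per case, whereas a chest pain discharge (MS‐DRG 313, a single‐level DRG) would see no payment change at all.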

Patient discharges with a diagnosis code for an HAC in the second position were substantially more likely to have a change in MS‐DRG assignment than cases in which the HAC was listed lower in the final list of diagnosis codes. Perhaps it is not surprising that MS‐DRG assignment is most likely to change when the HAC is in the second position, because an ICD‐9 diagnosis code in this position is more likely to be a major complication or comorbidity. For HACs listed in a lower position of the list of ICD‐9 diagnosis codes, it is likely that the patient had another major complication or comorbidity listed in the second position that would have maintained classification in the same MS‐DRG. Our results suggest that physicians and hospitals caring for patients with lower complexity of illness will sustain a higher financial burden as a result of an HAC under CMS' policy than providers whose patients sustain the exact same HAC but have underlying medical care of greater complexity.

These results raise further concerns about the ability of CMS' payment policy to improve quality. One criticism of CMS' policy is that all HACs are not universally preventable. If they are not preventable, payment reductions promulgated via the policy would be punitive rather than incentivizing. In their study of central catheter‐associated bloodstream infections and catheter‐associated urinary tract infections, for example, Lee et al. found no change in infection rates after implementation of CMS' policy.[16] As such, some have suggested HACs should not be used to determine reimbursement, and CMS should abandon its current nonpayment policy.[4, 17] Our findings echo this criticism given that the financial penalty for an HAC depends on whether a patient is more or less complex.

Because coding emanates from physician documentation, a uniform documentation process must exist to ensure consistent coding practices.[1, 2, 7, 9] This is not the case, however, and some hospitals comanage documentation to refine or maximize the number of ICD‐9 diagnosis and procedure codes. Furthermore, there are differences in the documentation practices of individual physicians. If physician documentation and coding variation lead to fewer ICD‐9 codes during an encounter, the chance that an HAC will trigger an MS‐DRG change increases.

Another source of variation in coding practices found in this study was code sequencing. Although guidelines for appropriate ICD‐9 diagnosis coding exist, individual subjectivity remains. The most essential step in the coding process is identifying the principal diagnosis by extrapolating from physician documentation and clinical data. For example, when a patient is admitted for chest pain and evaluation determines that the patient experienced a myocardial infarction, then myocardial infarction becomes the principal diagnosis. Based on that principal diagnosis, coders must select the relevant secondary diagnoses. The process involves a series of steps that must be followed exactly to ensure accurate coding.[12] There are no guidelines that coding personnel must follow to sequence secondary diagnoses, with the exception of listing MCCs and CCs prior to other secondary diagnoses. Ultimately, the order in which these codes are assigned may result in unfavorable variation in MS‐DRG assignment.[1, 2, 4, 7, 8, 9, 17]
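
The one sequencing rule that does exist, MCCs and CCs listed before other secondary diagnoses, amounts to a stable sort by severity tier. The severity map below is hypothetical; actual MCC/CC status comes from CMS' published code lists:

```python
# Hypothetical MCC/CC status for a few ICD-9 codes (illustrative only).
SEVERITY = {"518.81": "MCC", "428.0": "CC", "401.9": "other"}
TIER = {"MCC": 0, "CC": 1, "other": 2}

def sequence_secondary(codes):
    # Stable sort: MCCs first, then CCs, then remaining secondary diagnoses,
    # preserving the coder's original order within each tier.
    return sorted(codes, key=lambda c: TIER[SEVERITY.get(c, "other")])
```

Within each tier the coder's original order is preserved, which is exactly where the residual subjectivity described above enters.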

There are a number of limitations to this study. First, our cohort included only UHC‐affiliated academic medical centers, which may not represent all acute‐care hospitals and their coding practices. Second, our data are for discharges prior to implementation of the policy; however, this allowed us to analyze the anticipated impact of the policy before any direct or indirect changes in coding occurred in response to CMS' policy. Additionally, the number of diagnosis codes accepted by CMS was expanded from 9 to 25 in 2011. Future analyses that include MS‐DRG classifications with the expanded number of diagnosis codes should be conducted to validate our findings and determine whether any changes have occurred over time. It is also not known whether low illness severity scores signify patient or hospital characteristics. If they represent patient characteristics, then CMS' policy will disproportionately affect hospitals taking care of less severely ill patients. Alternatively, if hospital coding practice explains more of the variation in the number of ICD‐9 codes (and thus severity of illness), then the system of adjudicating reimbursement via HACs to incentivize quality of care will be flawed, as there is no standard position for HACs on a lengthier diagnosis list. Finally, we did not evaluate the change in DRG weight with the reassignment of MS‐DRG if the HAC had been included in the calculation. Future work should evaluate whether there is a differential impact of the policy by change in MS‐DRG weight.

CONCLUSION

Under CMS' current policy, hospitals and physicians caring for patients who have lower severity of illness and an HAC will be penalized disproportionately more than those caring for more complex, sicker patients with the identical HAC. If, in fact, HACs are indicators of a hospital's quality of care, then the CMS policy will likely do little to foster improved quality unless there is a reduction in coding practice variation and modifications ensure that the policy impacts reimbursement independent of severity of illness.

Disclosures

The authors acknowledge the financial support for data acquisition from the Rush University College of Health Sciences. The authors report no conflicts of interest.

Characteristics of Patients With a Hospital‐Acquired Condition Discharged Between October 2007 and April 2008 (N=7,027)
Variable | MS‐DRG Change, No. (%) or M ± SD, N=980 | No MS‐DRG Change, No. (%) or M ± SD, N=6,047 | P Value
  • NOTE: Abbreviations: DVT, deep venous thrombosis; HAC, hospital‐acquired conditions; ICD‐9, International Classification of Diseases, 9th Revision; M, mean; MS‐DRG, Medicare Severity Diagnosis‐Related Group; SD, standard deviation; UTI, urinary tract infection.

Patient sociodemographic characteristics
Age, y | 62.7 ± 18.9 | 57.5 ± 21.9 | <0.001
Race | | | 0.024
  White | 687 (70.1) | 4,006 (66.3)
  Black | 166 (16.9) | 1,100 (18.2)
  Hispanic | 45 (4.6) | 416 (6.9)
  Other | 82 (8.4) | 525 (8.7)
Sex | | | <0.001
  Male | 441 (45.0) | 3,298 (54.5)
  Female | 539 (55.0) | 2,749 (45.5)
Payer | | | <0.001
  Commercial | 279 (28.5) | 1,609 (26.6)
  Medicaid | 88 (9.0) | 910 (15.1)
  Medicare | 532 (54.3) | 3,003 (49.7)
  Self‐pay/charity | 52 (5.3) | 331 (5.5)
  Other | 29 (3.0) | 194 (3.2)
Severity of illness | | | <0.001
  Minor | 50 (5.1) | 71 (1.2)
  Moderate | 216 (22.0) | 359 (5.9)
  Major | 599 (61.1) | 1,318 (21.8)
  Extreme | 115 (11.7) | 4,299 (71.1)
Patient clinical characteristics
Number of ICD‐9 diagnosis codes per patient | 13.7 ± 6.0 | 20.2 ± 6.6 | <0.001
MS‐DRG weight | 2.9 ± 2.1 | 5.9 ± 6.1 | <0.001
Hospital characteristics
Mean number of ICD‐9 diagnosis codes per patient per hospital | 8.5 ± 1.4 | 8.6 ± 1.4 | 0.280
Total hospital discharges | 15,957 ± 6,553 | 16,857 ± 6,634 | <0.001
HACs per 1,000 discharges | 9.8 ± 3.7 | 10.2 ± 3.7 | <0.001
Hospital‐acquired condition
Type of HAC | | | <0.001
  Pressure ulcer | 334 (34.1) | 1,599 (26.4)
  Falls/trauma | 96 (9.8) | 440 (7.3)
  Catheter‐associated UTI | 19 (1.9) | 215 (3.6)
  Vascular catheter infection | 26 (2.7) | 1,179 (19.5)
  DVT/pulmonary embolism | 448 (45.7) | 2,145 (35.5)
  Other conditions | 57 (5.8) | 469 (7.8)
HAC position | | | <0.001
  2nd code | 850 (86.7) | 697 (11.5)
  3rd code | 45 (4.6) | 739 (12.2)
  4th code | 30 (3.1) | 641 (10.6)
  5th code | 15 (1.5) | 569 (9.4)
  6th code or higher | 40 (4.1) | 3,401 (56.2)

Withholding additional reimbursement for an HAC has been controversial. One area of debate is that the assignment of an HAC may be imprecise, in part due to the variation in how physicians document in the medical record.[1, 2, 6, 7, 8, 9] Coding is derived from documentation in physician notes and is the primary mechanism for assigning International Classification of Diseases, 9th Revision, Clinical Modification (ICD‐9) diagnosis codes to the patient's encounter. The coding process begins with health information technicians (ie, medical record coders) reviewing all medical record documentation to assign diagnosis and procedure codes using the ICD‐9 codes.[10] Primary and secondary diagnoses are determined by certain definitions in the hospital setting. Secondary diagnoses can be further separated into complications or comorbidities in the MS‐DRG system, which can affect reimbursement. The MS‐DRG is then determined using these diagnosis and procedure codes. Physician documentation is the principal source of data for hospital billing, because health information technicians (ie, medical record coders) must assign a code based on what is documented in the chart. If key medical detail is missing or language is ambiguous, then coding can be inaccurate, which may lead to inappropriate compensation.[11]

Accurate and complete ICD‐9 diagnosis and procedure coding is essential for correct MS‐DRG assignment and reimbursement.[12] Physicians may influence coding prioritization by either over‐emphasizing a patient diagnosis or by downplaying the significance of new findings. In addition, unless the physician uses specific, accurate, and accepted terminology, the diagnosis may not even appear in the list of diagnosis codes. Medical records with nonstandard abbreviations may result in coder‐omission of key diagnoses. Finally, when clinicians use qualified diagnoses such as rule‐out or probable, the final diagnosis coded may not be accurate.[10]

Although the CMS policy creates a financial incentive for hospitals to improve quality, the extent to which the policy actually impacts reimbursement across multiple HACs has not been quantified. Additionally, if HACsas a policy initiativereflect actual quality of care, then the position of the ICD‐9 code should not affect MS‐DRG assignment. In this study we evaluated the extent to which MS‐DRG assignment would have been influenced by the presence of an HAC and tested the association of the position of an HAC in the list of ICD‐9 diagnosis codes with changes in MS‐DRG assignment.

METHODS

Study Population

This study was a retrospective analysis of all patients discharged from hospital members of the University HealthSystem Consortium's (UHC) Clinical Data Base between October 2007 and April 2008. The data set was limited to patient discharge records with at least 1 of 10 HACs for which CMS no longer provides additional reimbursement (see Supporting Table 1 in the online version of this article). The presence of an HAC was indicated by the corresponding diagnosis code using the ICD‐9 diagnosis and procedure codes.

Data Source

UHC's Clinical Data Base is a database of patient discharge‐level administrative data used primarily for billing purposes. UHC's Clinical Data Base provides comparative data for in‐hospital healthcare outcomes using encounter‐level and line‐item transactional information from each member organization. UHC is a nonprofit alliance of 116 academic medical centers and 276 of their affiliated hospitals.

Dependent Variable: Change in MS‐DRG Assignment

The dependent variable was a change in MS‐DRG assignment. MS‐DRG assignment was calculated by comparing the MS‐DRG assigned when the HAC's ICD‐9 diagnosis code was considered a no‐payment event and was not included in the determination (ie, post‐policy DRG) with the MS‐DRG that would have been assigned when the HAC was not included in the determination (ie, pre‐policy DRG). The list of ICD‐9 diagnosis codes was entered into MS‐DRG grouping software with the ICD‐9 diagnosis code for each HAC in the identical position presented to CMS. Up to 29 secondary ICD‐9 diagnosis and procedure codes were entered, but the analyses of association on the position of the HAC used the first 9 diagnosis and 6 procedure codes processed by CMS, as only codes in these positions would have changed the MS‐DRG assigned during the study time period. If the 2 MS‐DRGs (pre‐policy DRG and post‐policy DRG) did not match, the case was classified as having a change in MS‐DRG assignment (MS‐DRG change).

Independent variables included in this analysis were coding variables and patient characteristics. Coding variables included the total number of ICD‐9 diagnosis codes recorded in the discharge record, absolute position of the HAC ICD‐9 diagnosis code in the order of all diagnosis codes, weight for the actual MS‐DRG, and specific type of HAC. The absolute position of the HAC was included in the analysis as a categorical variable (second position, third, fourth, fifth, and sixth position and higher). In addition, patient‐level characteristics including sociodemographic characteristics, clinical factors and severity of illness (minor, moderate, major, extreme),[6] and hospital‐level characteristics.

Statistical Analysis

Means and standard deviations or frequencies and percentages were used to describe the variables. A 2 test was used to test for differences in the absolute position of the HAC with change in MS‐DRG assignment (change/no change). In addition, 2 tests were used to test for differences in each of the other categorical independent variables with change in MS‐DRG assignment; t tests were used to test for differences in the continuous variables with change in MS‐DRG assignment.

Two multivariable binary logistic regression models were fit to test the relationship between change in MS‐DRG assignment with the absolute position of the HAC, adjusting for coding variables, patient characteristics, and hospital characteristics that were associated with change in MS‐DRG assignment in the bivariate analysis. The first model tested the relationship between change in MS‐DRG and position of the HAC, without accounting for the specific type of HAC, and the second tested the relationship including both position and the specific type of HAC. Receiver operating characteristic (ROC) curves were developed for each model to evaluate the predictive accuracy. Additionally, analyses were stratified by severity of illness, and the areas under the ROC curves for 3 models were compared to determine whether the predictive accuracy increased with the inclusion of variables other than HAC position. The first model included HAC position only, the second model added type of HAC, and the third model added other coding variables and patient‐ and hospital‐level variables.

Two sensitivity analyses were performed to test the robustness of the results. The first analysis tested the sensitivity of the results to the specification of comorbid disease burden, as measured by number of diagnosis codes. We used Elixhauser's method[13] for identifying comorbid conditions to create binary variables indicating the presence or absence of 29 distinct comorbid conditions, then calculated the total number of comorbid conditions. The binary logistic regression model was refit, with the total number of comorbid conditions in place of the number of diagnosis codes. An additional binary logistic regression model was fit that included the individual comorbid conditions that were associated with change in MS‐DRG assignment in a bivariate analysis (P<0.05). The second sensitivity analysis evaluated whether hospital‐level variation in coding practices explained change in MS‐DRG assignment using a hierarchical binary logistic regression model that included hospital as a random effect.

All statistical analyses were conducted using the SAS version 9.2 statistical software package (SAS Institute Inc., Cary, NC). The Rush University Medical Center Institutional Review Board approved the study protocol.

RESULTS

Of the 954,946 discharges from UHC academic medical centers, 7027 patients (0.7%) had an HAC. Of the patients with an HAC, 6047 did not change MS‐DRG assignment, whereas 980 patients (13.8%) had a change in MS‐DRG assignment. Patients with a change in MS‐DRG assignment were significantly different from those without a change in MS‐DRG assignment on all patient‐level characteristics and all but 1 hospital characteristic (Table 1). The variable with the largest absolute difference between those with and without a change in MS‐DRG was the actual position of the HAC; 86.7% of those with an MS‐DRG change had their HAC in the second position, whereas those without a change had only 11.5% in the second position.

After controlling for patient and hospital characteristics, an HAC in the second position in the list of ICD‐9 codes was associated with the greatest likelihood of a change in MS‐DRG assignment (P<0.001) (Table 2). Each additional ICD‐9 code decreased the odds of an MS‐DRG change (P=0.004), demonstrating that having more secondary diagnosis codes was associated with a lesser likelihood of an MS‐DRG change. After including the individual HACs in the regression model, the second position remained associated with the likelihood of a change in MS‐DRG assignment (results not shown). The predictive accuracy of our model did not improve, however, with the addition of type of HAC. The area under the ROC curve was 0.94 in both models, indicating high predictive power.

Results of Binary Logistic Regression Model for Change in MS‐DRG Assignment (N=7,027)
InterceptOdds RatioP Value
  • NOTE: The reference category for includes extreme severity of illness and HAC ICD‐9 code in the 6th position or higher. The model controls for patient age, sex, race/ethnicity, primary payer, hospital HAC rate, and total number of discharges per hospital. Abbreviations: HAC, hospital‐acquired conditions; ICD‐9, International Classification of Diseases, 9th Revision; MS‐DRG, Medicare Severity Diagnosis‐Related Group; ROC, receiver operating characteristic. *Compared to the model with patient sociodemographic characteristics only.

Minor severity of illness6.80<0.001
Moderate severity of illness5.52<0.001
Major severity of illness8.02<0.001
Number of ICD‐9 diagnosis codes per patient0.970.004
HAC ICD‐9 diagnosis code in 2nd position40.52<0.001
HAC ICD‐9 diagnosis code in 3rd position1.820.009
HAC ICD‐9 diagnosis code in 4th position1.720.032
HAC ICD‐9 diagnosis code in 5th position1.150.662
Area under the ROC curve0.94<0.001*
Area under the ROC curve, model with patient socio‐demographic characteristics only0.85 

The proportion of cases with a change in MS‐DRG by severity of illness is reported in Table 3. The largest proportion of cases with a change in MS‐DRG was in the minor severity of illness category (41.3%), whereas only 2.6% of cases with an extreme severity of illness had a change in MS‐DRG. Figure 1 shows ROC curves stratified by severity of illness. Figure 1A illustrates the ROC curves for the 121 (1.7%) patients with minor severity of illness. The area under the ROC curve for the model including HAC position only was 0.74, indicating moderate predictive power. The inclusion of HAC type increased the predictive power to 0.91, and inclusion of sociodemographic characteristics further increased it to 0.95. Figure 1B–D illustrates the ROC curves for moderate, major, and extreme severity of illness. At higher severity levels, the predictive accuracy of the models with HAC position alone was similar to that of the full models, demonstrating that HAC position alone had high predictive power for a change in MS‐DRG assignment.

Percentage of Patients With a Change in MS‐DRG by Severity of Illness, Discharges Between October 2007 and April 2008 (N=7,027)
Variable                     No.      Within‐Category % With MS‐DRG Change
  • NOTE: Abbreviations: MS‐DRG, Medicare Severity Diagnosis‐Related Group.

Severity of illness
  Minor                      121      41.3
  Moderate                   575      37.6
  Major                      1,917    31.3
  Extreme                    4,414    2.6
Figure 1
Receiver operating characteristic (ROC) curves stratified by severity of illness. Abbreviations: AUC, area under the curve; HAC, hospital‐acquired conditions.

In a sensitivity analysis that evaluated the robustness of our results to the specification of disease burden, inclusion of the number of comorbid conditions did not improve the predictive accuracy of the model. Although inclusion of individual comorbid conditions rather than number of diagnosis codes attenuated the odds ratio (OR) for HAC position (OR: 40.5 in the original model vs OR: 32.9 in the model with individual comorbid conditions), the improvement of the predictive accuracy of the model was small (area under the ROC curve=0.936 in the original model vs 0.943 in the model with individual conditions, P<0.001) (results not shown). In a sensitivity analysis using a hierarchical logistic regression model that included hospital random effects, hospital‐level variation in coding practices did not attenuate the relationship between HAC position and MS‐DRG change (results not shown).

DISCUSSION

This study investigated the association between the position of the ICD‐9 diagnosis code for an HAC and a change in MS‐DRG assignment in a sample of patients discharged from US academic medical centers. We found that only 14% of patients with an HAC would have experienced a change in MS‐DRG assignment. Our results are consistent with those of Teufack et al.,[14] who estimated the economic impact of CMS' HAC policy for neurosurgery services at a single hospital to be 0.007% of overall net revenues. Nevertheless, the majority of hospitals have increased their efforts to prevent the HACs included in CMS' policy.[15] At the same time, most hospitals have not increased their budgets for preventing HACs, and instead have reallocated resources from nontargeted HACs to those included in CMS' policy.

The low proportion of records impacted by the policy may be partially explained by the fact that CMS' policy only affects reimbursement for MS‐DRGs with multiple levels. For example, heart failure has 3 levels of reimbursement in the MS‐DRG system (Table 4). Prior to CMS' policy, a heart failure patient with an air embolism as an HAC would have been classified in the most severe MS‐DRG (291), whereas after implementation the patient would be classified in the least severe MS‐DRG if no other complication or comorbidity (CC) or major complication or comorbidity (MCC) were present. Chest pain has only 1 level, and reimbursement for a patient with an HAC classified in the chest pain MS‐DRG would not be impacted by CMS' policy. Most hospitalized patients are complicated, and the proportion of complicated patients will continue to increase as less complex care shifts to the ambulatory setting; the relative effectiveness of CMS' policy is therefore likely to diminish over time.

Example of MS‐DRG Codes and Weights, Fiscal Year 2014
Variable                                             MS‐DRG    DRG Weight
  • NOTE: Abbreviations: DRG, Diagnosis‐Related Group; MS‐DRG, Medicare Severity Diagnosis‐Related Group.

Heart failure and shock
  With major complications and comorbidities         291       1.5062
  With complications and comorbidities               292       0.9952
  Without major complications or comorbidities       293       0.6718
Chest pain                                           313       0.5992
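The reimbursement consequence of reassignment follows directly from the weights in Table 4: the DRG weight scales a hospital-specific base payment rate (ignoring adjustments such as wage indices and add-on payments). A minimal sketch using the FY2014 weights above and a purely hypothetical base rate, since actual base rates vary by hospital:

```python
# MS-DRG weights for heart failure and shock, FY2014 (Table 4).
WEIGHTS = {
    291: 1.5062,  # with MCC
    292: 0.9952,  # with CC
    293: 0.6718,  # without CC/MCC
}

BASE_RATE = 6000.00  # hypothetical hospital base payment rate, USD

def payment(ms_drg, base_rate=BASE_RATE):
    """Simplified payment: MS-DRG weight x hospital base rate."""
    return WEIGHTS[ms_drg] * base_rate

# Before the HAC policy, an air embolism coded as an MCC would place a
# heart failure patient in MS-DRG 291; excluding the HAC (and absent any
# other CC/MCC) drops the stay to MS-DRG 293.
loss = payment(291) - payment(293)
print(f"payment difference per case: ${loss:,.2f}")
```

At this illustrative base rate the two-level drop is worth roughly $5,000 per case, which is the incremental cost the policy shifts onto the hospital.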

Patient discharges with a diagnosis code for an HAC in the second position were substantially more likely to have a change in MS‐DRG assignment than cases with an HAC listed lower in the final list of diagnosis codes. Perhaps it is not surprising that MS‐DRG assignment is most likely to change when the HAC is in the second position, because an ICD‐9 diagnosis code in this position is more likely to be a major complication or comorbidity. For HACs listed lower in the list of ICD‐9 diagnosis codes, the patient likely had another major complication or comorbidity in the second position that maintained classification in the same MS‐DRG. Our results suggest that physicians and hospitals caring for patients with lower complexity of illness will sustain a higher financial burden from an HAC under CMS' policy than providers whose patients sustain the exact same HAC but have more complex underlying illness.

These results raise further concerns about the ability of CMS' payment policy to improve quality. One criticism of CMS' policy is that all HACs are not universally preventable. If they are not preventable, payment reductions promulgated via the policy would be punitive rather than incentivizing. In their study of central catheter‐associated bloodstream infections and catheter‐associated urinary tract infections, for example, Lee et al. found no change in infection rates after implementation of CMS' policy.[16] As such, some have suggested HACs should not be used to determine reimbursement, and CMS should abandon its current nonpayment policy.[4, 17] Our findings echo this criticism given that the financial penalty for an HAC depends on whether a patient is more or less complex.

Because coding emanates from physician documentation, a uniform documentation process must exist to ensure consistent coding practices.[1, 2, 7, 9] This is not the case, however, and some hospitals comanage documentation to refine or maximize the number of ICD‐9 diagnosis and procedure codes. Furthermore, documentation practices differ among individual physicians. If variation in physician documentation and coding leads to fewer ICD‐9 codes for an encounter, the chance that an HAC will trigger an MS‐DRG change increases.

Another source of variation in coding practices found in this study was code sequencing. Although guidelines for appropriate ICD‐9 diagnosis coding exist, individual subjectivity remains. The most essential step in the coding process is identifying the principal diagnosis by extrapolating from physician documentation and clinical data. For example, when a patient is admitted for chest pain and evaluation determines that the patient experienced a myocardial infarction, myocardial infarction becomes the principal diagnosis. Based on that principal diagnosis, coders must select the relevant secondary diagnoses. The process involves a series of steps that must be followed exactly to ensure accurate coding.[12] There are no guidelines that coding personnel must follow to sequence secondary diagnoses, with the exception of listing MCCs and CCs before other secondary diagnoses. Ultimately, the order in which these codes are assigned may result in unfavorable variation in MS‐DRG assignment.[1, 2, 4, 7, 8, 9, 17]
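The sequencing convention described above (MCCs and CCs listed before other secondary diagnoses, with no further mandated order) can be sketched as a simple stable sort. The codes and severity flags below are invented for illustration; real CC/MCC status comes from CMS's published lists, not from this toy table:

```python
# Illustrative secondary-diagnosis list as (ICD-9 code, severity tier).
# The tier labels here are assumptions for the sketch, not actual CMS
# CC/MCC designations.
secondary_dx = [
    ("401.9", "none"),   # hypertension, unspecified
    ("999.31", "MCC"),   # illustrative HAC-type code flagged as an MCC
    ("250.00", "CC"),    # illustrative code flagged as a CC
    ("272.4", "none"),   # hyperlipidemia
]

TIER_RANK = {"MCC": 0, "CC": 1, "none": 2}

def sequence_secondary(dx_list):
    """Stable sort: MCCs first, then CCs, then the rest; within each
    tier the original order is preserved, since no order is mandated."""
    return sorted(dx_list, key=lambda dx: TIER_RANK[dx[1]])

ordered = sequence_secondary(secondary_dx)
print([code for code, _ in ordered])
# When the HAC is the only MCC, it lands directly after the principal
# diagnosis, i.e., in the 2nd position of the full code list: exactly
# the situation the regression found most likely to change the MS-DRG.
```

Within-tier order is left to the coder, which is the unregulated degree of freedom the text identifies as a source of variation in MS‐DRG assignment.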

There are a number of limitations to this study. First, our cohort included only UHC‐affiliated academic medical centers, which may not represent all acute‐care hospitals and their coding practices. Second, our data are for discharges prior to implementation of the policy; this, however, allowed us to analyze the anticipated impact of the policy before any direct or indirect changes in coding occurred in response to it. Additionally, the number of diagnosis codes accepted by CMS was expanded from 9 to 25 in 2011. Future analyses that include MS‐DRG classifications with the expanded number of diagnosis codes should be conducted to validate our findings and determine whether any changes have occurred over time. It is also not known whether low illness severity scores signify patient or hospital characteristics. If they represent patient characteristics, then CMS' policy will disproportionately affect hospitals taking care of less severely ill patients. Alternatively, if hospital coding practice explains more of the variation in the number of ICD‐9 codes (and thus severity of illness), then the system of adjudicating reimbursement via HACs to incentivize quality of care will be flawed, as there is no standard position for HACs on a longer diagnosis list. Finally, we did not evaluate the change in DRG weight with the reassignment of MS‐DRG if the HAC had been included in the calculation. Future work should evaluate whether there is a differential impact of the policy by change in MS‐DRG weight.

CONCLUSION

Under CMS' current policy, hospitals and physicians caring for patients who have lower severity of illness and an HAC will be penalized disproportionately more than those caring for more complex, sicker patients with the identical HAC. If, in fact, HACs are indicators of a hospital's quality of care, then the CMS policy will likely do little to foster improved quality unless coding practice variation is reduced and the policy is modified to ensure that it impacts reimbursement independent of severity of illness.

Disclosures

The authors acknowledge the financial support for data acquisition from the Rush University College of Health Sciences. The authors report no conflicts of interest.

References
  1. Centers for Medicare and Medicaid Services. Hospital‐acquired conditions (present on admission indicator). Available at: http://www.cms.hhs.gov/HospitalAcqCond/05_Coding.asp#TopOfPage. Updated 2012. Accessed September 20, 2012.
  2. Centers for Medicare and Medicaid Services. Hospital‐acquired conditions: coding. Available at: http://www.cms.gov/Medicare/Medicare‐Fee‐for‐Service‐Payment/HospitalAcqCond/Coding.html. Updated 2012. Accessed February 2, 2012.
  3. ICD‐9‐CM 2009 Coders' Desk Reference for Procedures. Eden Prairie, MN: Ingenix; 2009.
  4. Averill RF, Hughes JS, Goldfield NI, McCullough EC. Hospital complications: linking payment reduction to preventability. Jt Comm J Qual Patient Saf. 2009;35(5):283–285.
  5. McNutt R, Johnson TJ, Odwazny R, et al. Change in MS‐DRG assignment and hospital reimbursement as a result of Centers for Medicare
Issue
Journal of Hospital Medicine - 9(11)
Page Number
707-713
Display Headline
Association of the position of a hospital‐acquired condition diagnosis code with changes in medicare severity diagnosis‐related group assignment

© 2014 Society of Hospital Medicine

Address for correspondence and reprint requests: Tricia Johnson, PhD, Department of Health Systems Management, Rush University Medical Center, 1700 West Van Buren Street, TOB Suite 126B, Chicago, IL 60612; Telephone: 312‐942‐7107; Fax: 312‐942‐4957; E‐mail: [email protected]

Diagnosis Discrepancies and LOS

Display Headline
Discrepancy between admission and discharge diagnoses as a predictor of hospital length of stay

Recent research has found that the addition of clinical data to administrative data strengthens the accuracy of predicting inpatient mortality.1, 2 Pine et al.1 showed that including present on admission (POA) codes and numerical laboratory data resulted in substantially better fitting risk adjustment models than those based on administrative data alone. Risk adjustment models, despite improvement with the use of POA codes, are still imperfect and severity adjustment alone does not explain differences in mortality as well as we would hope.2

The addition of POA codes improves prediction of mortality because they distinguish between conditions that were present at the time of admission and conditions that were acquired during the hospitalization, but it is not known whether the addition of these codes is related to other measures of hospital performance, such as differences in length of stay (LOS). Which of the factors related to the patient's clinical condition at the time of hospital admission drive differences in outcomes?

A patient's admission diagnosis may be an important piece of information that accounts for differences in hospital care. A patient's diagnosis at the time of hospital admission leads to the initial course of treatment. If the admitting diagnosis is inaccurate, a physician may spend critical time following a course of unneeded treatment until the correct diagnosis is made (reflected by a discrepancy between the admitting and discharge diagnosis codes). This discrepancy may be a marker of the fact that, while some patients are admitted to the hospital for treatment of a previously diagnosed condition, other patients require a diagnostic workup to determine the clinical problem.

A discrepancy may also reflect poor systems of documenting critical information and result in delays in care, with potentially serious health consequences.3, 4 If diagnosis discrepancy is a marker of difficult‐to‐diagnose cases, leading to delays in care, we may be able to improve our understanding of perceived differences in the production of high‐quality medical care and proactively identify cases which need more attention at admission to ensure that necessary care is provided as quickly as possible.

Almost universally, comparisons of hospital performance are risk‐adjusted to account for differences in case mix and severity across institutions. These risk‐adjustment models rely on discharge diagnoses to adjust for clinical differences among patients, even though recent research has shown that models using discharge diagnoses alone are inadequate predictors of variation in mortality among hospitals. While the findings of Pine et al.1 suggest the need to add certain clinical information, such as laboratory values, to improve these models, this information may be costly for some institutions to collect and report. We aimed to explore whether other simple‐to‐measure factors that are independent of the quality of care provided and routinely collected by hospitals' electronic information systems can be used to improve risk‐adjustment models. To assess the potential of other routinely collected diagnostic information in explaining differences in health outcomes, this study examined whether a discrepancy between the admission and discharge diagnoses was associated with hospital LOS.

Patients and Methods

Patient Population

The sample included all patients age 18 years and older who were admitted to and discharged from the general medicine units at Rush University Medical Center between July 2005 and June 2006. We further limited the sample to patients who were admitted via the emergency department (ED) or directly by their physician, excluding patients with scheduled admissions, for which LOS may vary little, and patients transferred from other hospitals. We also excluded patients admitted directly to the intensive care units; however, we retained patients who were transferred to the intensive care units during their stay. Only a small percentage of cases (1.2%) fit this designation, and we did not explore the effects of this clinical situation because of the small number of patients. We aimed to construct a sample of patients for whom admission was more likely to be for an episodic and diagnostically complex set of symptoms and signs.

Diagnosis Discrepancy

Admission and discharge diagnosis codes were classified using the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD‐9‐CM). An admission diagnosis is routinely documented and coded by hospitals but is not used by most private and public payers for reimbursement purposes, unlike the discharge diagnosis codes. The admission diagnosis code summarizes information known at the time the patient is admitted to the hospital and corresponds to the chief complaint in the history and physical report. Its specificity may depend on a variety of patient and physician‐related factors, and neither the quality of the information collected at admission nor the specificity of the coded information is externally regulated. Only one admission diagnosis code is captured and, like the discharge diagnosis codes, coded at the time of discharge. The admission diagnosis code reflects the amount of information known at the time of admission but is retrospectively coded.

A patient may have multiple discharge diagnosis codes. These codes summarize information collected throughout a hospitalization. The discharge diagnosis codes are used to bill third‐party payers and patients. In addition, governmental agencies, benchmarking institutions, and researchers use the discharge diagnosis codes to classify a patient's condition, identify comorbidities, and measure severity of illness.

We measured discrepancy between admission and discharge diagnoses in two ways. We first compared the admitting diagnosis code with the principal discharge diagnosis code. A match was defined as a patient record in which the two codes were exactly the same at the terminal digit. If the two codes did not match exactly at the terminal digit, we classified the patient as having a discrepancy, or mismatch, between diagnosis codes. For example, if the admitting diagnosis code was 786.05 (shortness of breath) and the principal discharge diagnosis code was 428.0 (congestive heart failure, unspecified), the diagnosis codes were classified as discrepant. To test the robustness of our definition of discrepancy between admitting and discharge diagnoses, we created a second variable that compared the admitting diagnosis code with the first five discharge diagnosis codes. If the admitting diagnosis code did not match any of the first five discharge diagnosis codes, the diagnosis codes were classified as discrepant.

We use the term diagnosis discrepancy to refer to records that have a mismatch between admitting and discharge diagnosis codes.
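The two discrepancy definitions above reduce to a short matching routine. A minimal sketch, assuming codes are compared as exact strings down to the terminal digit (the example codes are taken from the text):

```python
def is_discrepant(admit_code, discharge_codes, top_k=1):
    """A record is discrepant when the admitting ICD-9 code matches none
    of the first top_k discharge codes exactly (terminal digit included).
    top_k=1 is the principal-diagnosis definition; top_k=5 is the
    broader definition used in the robustness check."""
    return admit_code not in discharge_codes[:top_k]

# Example from the text: admitted for shortness of breath, discharged
# with congestive heart failure.
admit = "786.05"
discharges = ["428.0", "584.9", "276.51"]

print(is_discrepant(admit, discharges))            # principal-only definition
print(is_discrepant(admit, discharges, top_k=5))   # top-5 definition
```

The sensitivity analysis described later simply sweeps `top_k` from 1 through 10; a record leaves the discrepant group as soon as any of the first `top_k` discharge codes matches the admitting code.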

Models and Data Collection

The outcome of interest was inpatient LOS. The primary independent variable was whether the patient record had a discrepancy between the admitting and discharge diagnosis codes.

Our models controlled for the following variables: age; sex; admission source (ED or primary care provider); primary source of insurance (Medicare, Medicaid, or commercial coverage); and severity of illness, measured by the number of comorbid conditions.5, 6 We also controlled for the general type of clinical condition, which was classified by the principal discharge diagnosis code using the Healthcare Cost and Utilization Project's Clinical Classifications Software 2007.7 Data were collected from the institution's clinical data warehouse.

Statistical Analysis

A generalized linear regression model fit with a negative binomial distribution was used to test for an association between inpatient LOS and a discrepancy between admitting and discharge diagnosis codes, adjusting for the variables described above. We reestimated our models without the respective diagnosis discrepancy variable and calculated a likelihood ratio test statistic for the two models to determine whether the addition of diagnosis discrepancy significantly improved our models.

We used two sensitivity tests to assess the specification of our models. First, we included two interaction terms: one for diagnosis discrepancy and ED admission, to assess whether the association between diagnosis discrepancy and LOS differed by admission source; and another for diagnosis discrepancy and the number of comorbidities, to assess whether the association differed by level of patient complexity. Second, we incrementally broadened our definition of a match between admitting and discharge diagnoses, comparing the admitting diagnosis with the first two discharge diagnoses, then the first three, and so on through the first 10, and reestimated the regression models with each successively broader definition to further assess the robustness of diagnosis discrepancy as a predictor of LOS.

Results

Of the 5,375 patients discharged between July 2005 and June 2006, 75.6% had a discrepancy between their admitting and principal discharge diagnosis. Patients with a discrepancy between their admitting and principal discharge diagnosis codes had significantly longer LOS, were older, had more comorbid conditions, and were more likely to be male, to be admitted through the ED, and to have Medicare (Table 1). Results were similar for the more encompassing definition of a discrepancy between the admitting and top 5 discharge diagnoses (results not shown).

Sample Characteristics by Presence or Absence of a Discrepancy Between Admission Diagnosis and Principal Discharge Diagnosis
Variable                                     n        No Discrepancy (n = 1,313)    Discrepancy (n = 4,062)    P*
  • NOTE: Number of patients (n) = 5,375. Numbers in parentheses are standard errors (SEs).

  • Significance.

LOS (days), mean (SE)                                 3.4 (3.6)                     4.2 (4.1)                  <0.001
Age (years), mean (SE)                                56.3 (18.8)                   59.7 (18.6)                <0.001
Comorbid conditions (number), mean (SE)               1.2 (1.2)                     1.4 (1.3)                  <0.001
Gender (%)                                                                                                    0.019
  Male                                       2,201    29.8                          70.3
  Female                                     3,174    26.1                          73.9
Admission source (%)
  Direct                                     4,202    29.8                          70.3                       <0.001
  ED                                         1,173    22.9                          77.1
Insurance coverage
  Medicare                                   2,677    21.6                          78.4                       <0.001
  Medicaid                                   908      26.3                          73.7
  Commercial                                 1,790    27.7                          72.3
Clinical domain (%)                                                                                           <0.001
  Endocrine                                  370      22.7                          77.3
  Nervous system                             230      35.7                          64.4
  Circulatory                                1,008    19.0                          81.1
  Respiratory                                483      16.4                          83.6
  Digestive                                  852      14.2                          85.8
  Genitourinary                              372      19.6                          80.4
  Skin                                       249      53.0                          47.0
  Musculoskeletal                            276      20.3                          79.7
  Injury/poisoning                           549      27.9                          72.1
  Other                                      986      34.7                          65.3

Table 2 reports the 10 most common admitting diagnoses that did not match the principal discharge diagnosis code and the 10 most common principal discharge diagnoses that did not match the admitting diagnosis code. The top 10 discrepant admitting diagnosis codes represented nearly one‐half of all cases with a discrepancy between the admitting and discharge diagnoses. The top 10 principal discharge diagnosis codes represented 23% of all discrepant diagnoses. Table 3 lists the 10 most common pairs of mismatched admitting and principal discharge diagnosis codes. The most common mismatched pair was a principal admitting diagnosis code of 786.05 (shortness of breath) and discharge diagnosis code of 428.0 (congestive heart failure, unspecified).

Ten Most Common Discrepant Admission and Principal Discharge Diagnosis Codes
Admission diagnosis codes not matching the principal discharge diagnosis:
  1.  786.05   Shortness of breath                   11.1%
  2.  789.00   Abdominal pain, unspecified site       8.5%
  3.  780.6    Fever                                  6.7%
  4.  786.50   Chest pain, unspecified                5.6%
  5.  787.01   Nausea without vomiting                3.9%
  6.  780.99   Other general symptoms                 3.4%
  7.  780.79   Other malaise and fatigue              3.0%
  8.  780.2    Syncope and collapse                   2.6%
  9.  729.5    Pain in limb                           2.1%
  10. 729.81   Swelling of limb                       2.0%

Principal discharge diagnosis codes not matching the admission diagnosis:
  1.  428.0    Congestive heart failure, unspecified                                               6.0%
  2.  486      Pneumonia, organism unspecified                                                     3.3%
  3.  584.9    Acute renal failure, unspecified                                                    2.2%
  4.  786.59   Chest pain, other                                                                   2.1%
  5.  599.0    Urinary tract infection, site not specified                                         2.1%
  6.  996.81   Complications of kidney transplant                                                  1.8%
  7.  577.0    Acute pancreatitis                                                                  1.7%
  8.  996.62   Infection and inflammatory reaction due to other vascular device, implant or graft  1.4%
  9.  434.91   Cerebral artery occlusion with cerebral infarction, unspecified                     1.3%
  10. 008.8    Intestinal infection, not elsewhere classified                                      1.0%
Ten Most Common Pairs of Discrepant Admission and Primary Discharge Diagnosis Codes
Admission Diagnosis                              Principal Discharge Diagnosis
786.05  Shortness of breath                      428.0   Congestive heart failure, unspecified
786.50  Chest pain, unspecified                  786.59  Chest pain, other
786.05  Shortness of breath                      486     Pneumonia, organism unspecified
780.6   Fever                                    486     Pneumonia, organism unspecified
780.6   Fever                                    996.62  Infection and inflammatory reaction due to other vascular device, implant or graft
789.00  Abdominal pain, unspecified site         577.0   Acute pancreatitis
780.6   Fever                                    599.0   Urinary tract infection, site not specified
786.05  Shortness of breath                      491.21  Obstructive chronic bronchitis with acute exacerbation
786.05  Shortness of breath                      415.19  Pulmonary embolism and infarction, other
786.05  Shortness of breath                      493.22  Chronic obstructive asthma, with acute exacerbation

Table 4 reports the results of the generalized linear model predicting LOS. Discrepancy between the admitting and principal discharge diagnoses was associated with a 22.5% longer LOS (P < 0.01), translating into a 0.76‐day increase at the mean for those with discrepant diagnoses. Our results are robust to our definition of discrepancy between admitting and discharge diagnoses. Using the discrepancy definition based on the top five discharge diagnosis codes, a discrepancy between admitting and discharge diagnoses was associated with a 15.4% longer LOS (P < 0.01), translating into a 0.52‐day increase. Results of the likelihood ratio test showed that the addition of diagnosis discrepancy significantly improved the fit of the regression models using both the principal and top five discharge diagnosis codes.

Results for Generalized Linear Regression Model Predicting LOS (n = 5,375)
Variable                                                    Coefficient
                                                            Model 1          Model 2          Model 3
  • NOTE: Model 1 excludes diagnosis discrepancy variable; model 2 includes diagnosis discrepancy variable using the principal discharge diagnosis code; model 3 includes diagnosis discrepancy variable using the first 5 discharge diagnosis codes. Omitted category includes match in admitting and discharge diagnoses (in models 2 and 3), male, direct admission and commercial insurance coverage. Models control for clinical domain. Generalized linear models are estimated with a negative binomial distribution. Standard errors (SEs) are shown in parentheses.

  • Significance at the 1% level or better.

Intercept                                                   0.98* (0.06)     0.84* (0.06)     0.89* (0.06)
Diagnosis discrepancy with principal discharge diagnosis                     0.20* (0.03)
Diagnosis discrepancy with top 5 discharge diagnoses                                          0.14* (0.02)
Age                                                         0.001 (0.001)    0.001 (0.001)    0.001 (0.001)
Female                                                      0.03 (0.02)      0.03 (0.02)      0.03 (0.02)
Emergency department admission                              0.02 (0.03)      0.03 (0.03)      0.03 (0.03)
Medicare                                                    0.15* (0.03)     0.15* (0.03)     0.15* (0.03)
Medicaid                                                    0.04 (0.03)      0.04 (0.03)      0.05 (0.03)
Number of comorbid conditions                               0.13* (0.01)     0.13* (0.01)     0.13* (0.01)
Log likelihood for model                                    11737.23         11797.54         11771.76
Likelihood ratio test statistic                                              120.62*          69.06*
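The likelihood-ratio statistics in Table 4 can be reproduced from the reported model log-likelihoods, and the percent changes in LOS quoted in the text follow from exponentiating the negative binomial (log-link) coefficients. A worked check, noting that the table's rounded coefficients give slightly different percentages than the unrounded fit reported in the text:

```python
import math

# Model log-likelihoods as reported in Table 4 (base model, model with
# the principal-diagnosis discrepancy term, model with the top-5 term).
LL_BASE, LL_PRINCIPAL, LL_TOP5 = 11737.23, 11797.54, 11771.76

def lr_statistic(ll_reduced, ll_full):
    """Likelihood-ratio test statistic: 2 x (LL_full - LL_reduced)."""
    return 2 * (ll_full - ll_reduced)

print(f"{lr_statistic(LL_BASE, LL_PRINCIPAL):.2f}")  # 120.62, as in Table 4
print(f"{lr_statistic(LL_BASE, LL_TOP5):.2f}")       # 69.06, as in Table 4

# With a log link, a coefficient b implies a multiplicative effect
# exp(b) on expected LOS. The rounded table coefficients give ~22.1%
# and ~15.0%; the text's 22.5% and 15.4% come from the unrounded fit.
pct_principal = (math.exp(0.20) - 1) * 100
pct_top5 = (math.exp(0.14) - 1) * 100
print(f"{pct_principal:.1f}% / {pct_top5:.1f}%")
```

Checking the published statistics this way also confirms how the three model columns relate: each discrepancy model is the base model plus one added term, tested on one degree of freedom.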

Broadening our definition of a match between admitting and discharge diagnosis codes from matching only on the principal discharge diagnosis code to the first 10 discharge diagnosis codes showed that even when using the first 10 discharge diagnoses, a diagnosis discrepancy still significantly increased LOS. The magnitude weakened, however, as the definition of a match in diagnosis codes was broadened, ranging from 22.5% when including the principal discharge diagnosis code only to 12.1% when including the first 10 discharge diagnosis codes (Figure 1).

Figure 1
Association between discrepancy in admission and discharge diagnoses and LOS for the first 10 discharge diagnosis codes (n = 5,375).

Discussion

Discrepancy between admitting and discharge diagnosis codes was associated with a large increase in LOS, even after controlling for age, sex, admission source, insurance, number of comorbid conditions, and clinical domain. This discrepancy translated into an increase of 0.76 days in LOS per general medicine patient, nearly two‐thirds larger than the increase in LOS of 0.47 days associated with having one comorbid condition, and equated to 4,102 additional patient days for the 5,375 general internal medicine patients admitted.

The relative and absolute increase in LOS associated with a diagnosis discrepancy is considerably larger than that associated with measures of comorbid illness found in other studies. In a study examining the predictive power of comorbidity measures based on diagnosis codes and outpatient pharmacy records, Parker et al.8 found that the inclusion of comorbid conditions based on only discharge diagnosis codes was associated with up to a 0.28‐day increase in LOS, and the further inclusion of comorbidity markers based on pharmacy data was associated with up to an additional 0.09‐day LOS. In a study comparing different measures of disease severity and comorbidities in predicting LOS for total knee replacement patients, Melfi et al.9 found that the addition of one diagnosis code was associated with a 3.3% increase in LOS. Similarly, Kieszak et al.10 found that the likelihood of having an LOS greater than 10 days increased two‐fold for patients with carotid endarterectomy and at least one comorbidity.

While a discrepancy between the admitting and discharge diagnosis codes was consistently associated with an increased LOS, the underlying reasons are not yet understood. We can only speculate about the reasons for this association, and further work is needed to test these hypotheses. There are several possible explanations for discrepant cases: (1) poorer documentation at the time of admission, (2) more complexity in terms of the diagnostic task, and (3) less thorough diagnostic workup at the time of admission.

First, we do not think that poor documentation at the time of admission is the most likely explanation. Our ED uses documentation templates for all admitted patients, which equalizes the amount of documentation for many patients. More important, diagnosis codes at the time of admission via the ED are assigned by physicians, not by coders working from documented information.

We do think that the most likely reason is that patients with discrepant diagnoses are truly harder to diagnose cases. For example, we assume that the time to provide care to patients once admitted is the same regardless of the ED or preadmission triage. For example, assume all patients are seen nearly as soon as admitted and the workup promptly ensues. Hence, under these conditions, variation in LOS may be due to more care needed for the most severely ill. If this assertion is true, our finding is a new one and adds a new candidate variable to explain variation in care due to patient severity (beyond comorbid illness, which we controlled for). We think we are showing that diagnostic uncertainty is a common, previously unexamined component of the complexity of clinical presentations (we propose that diagnosis discrepancy is a complexity variable rather than a comorbid, severity of illness variable). For example, discrepancy between admitting and discharge diagnosis codes could be due to other patient characteristics such as a patient's inability to communicate his or her symptoms to the physician due to language or cultural barriers.

However, regarding the third possible reason, if the ED or the preadmission setting fails to provide diagnostic services prior to admission for those patients with discrepant diagnoses regardless of diagnostic complexity, then our finding is a hospital or system performance variable. Those patients with discrepant diagnoses may have had a less thorough workup prior to admission leading to more workup being needed during the admission.

Regardless of the reason (perhaps all three reasons are involved at some level), our study points to a new component of patient care variations. We hope our finding spurs future research efforts. We are about to embark on a comparison of patients with identical discharge diagnoses but discrepant or not discrepant admission diagnoses to explore variations in the amount/type of diagnostic and treatment plans provided both before and during hospitalization.

In further support of diagnosis complexity as the reason for discrepancy is that the codes on admission for discrepantly coded patients are nonspecific, symptom or sign diagnoses (ie, shortness of breath, abdominal pain) while discharge diagnoses are more specific (ie, congestive heart failure, pancreatitis) (Tables 2 and 3). The nonspecific nature of the preliminary codes likely signifies more clinically complex situations and when noted, over and above previously described risk adjustment models, the discrepancy portends more healthcare needs. For patients admitted without a clear diagnosis of a clinical problem, diagnostic workups may be more complex and require longer hospitalization. For these patients, a longer LOS may not be a marker of poor quality of care, but instead the lack of critical information present at the time of admission.

Our comparison of the association between LOS and a discrepancy in diagnosis codes when the admitting diagnosis code was successively matched to a larger number of discharge diagnosis codes suggests that LOS increases not only when the admitting diagnosis is incorrect or not sufficiently specific, but also when the admitting diagnosis is correct, but not the principal discharge diagnosis. Taken together, these findings suggest that delays in care may result from lack of clear patient diagnostic information at the time of admission.

Our findings may advance the understanding of variations in hospital care from two standpoints. First, noting the discrepant diagnoses may significantly improve prediction in health services research studies examining variations in hospital performance, even beyond the addition of POA coding. Second, and perhaps more importantly, prospectively identifying patients at the time of admission with the nonspecific, preliminary codes identified in our study may allow physicians to target earlier in care patients with more demanding care needs. We realize, however, that before we could use this information to prospectively attempt to improve care, coding would have to be done at admission rather than discharge. At our site, this is true in the ED setting. Patients are assigned an admission diagnosis code as they leave the ED and this code is carried through to discharge without alteration. A nonspecific admission code could, for example, alert those taking care of the patient in the hospital that this is perhaps a more complex clinical situation requiring earlier consultation. Concurrent coding could also jumpstart studies to better understand whether what we have found in this preliminary study is due to poor assessment or difficult patient situations. However, this contingency may not be possible for those admitted directly from physician offices, as both the admission and discharge codes are determined at the time of discharge and based on documentation. Yet, on admission, a chief complaint is provided that may serve the same purpose as an admission diagnosis code if they are sufficiently in agreement.

Our study has limitations. It is from a single medical center and uses administrative data alone. We did not have access to clinical records for more detailed information about the content and completeness of medical records at the time of admission. Our observations should be tested in other hospital systems. Another limitation may be that we focus on discrepancy and not on those patients without a discrepancy. However, the aim of testing for discrepancy is to focus on improvement. Conducting a more in‐depth chart review of patients with similar final diagnoses, some with discrepant codes and others with nondiscrepant codes, may be a way to assess the reasons why LOS varied in the two groups. The next step, should our observations be confirmed, is to systematically assess whether other characteristics exist that differentiate cases in which a discrepancy between diagnosis codes is due to diagnostic uncertainty from those in which it is due to diagnostic oversight or error. A method to systematically identify conditions at admission that are likely to be misdiagnosed or have a delay in diagnosis may substantially improve the overall quality of care provided in the hospital.

References
  1. Pine M, Jordan HS, Elixhauser A, et al. Enhancement of claims data to improve risk adjustment of hospital mortality. JAMA. 2007;297:71-76.
  2. Iezzoni LI. The risks of risk adjustment. JAMA. 1997;278:1600-1607.
  3. Jencks SF, Huff ED, Cuerdon T. Change in the quality of care delivered to Medicare beneficiaries, 1998-1999 to 2000-2001. JAMA. 2003;289:305-312.
  4. Graff LG, Wang Y, Borkowski B, et al. Delay in the diagnosis of acute myocardial infarction: effect on quality of care and its assessment. Acad Emerg Med. 2006;13:931-938.
  5. Rochon PA, Katz JN, Morrow LA, et al. Comorbid illness is associated with survival and length of hospital stay in patients with chronic disability. A prospective comparison of three comorbidity indices. Med Care. 1996;34:1093-1101.
  6. Iezzoni LI, Foley SM, Daley J, Hughes J, Fisher ES, Heeren T. Comorbidities, complications and coding bias: does the number of diagnosis codes matter in predicting in-hospital mortality? JAMA. 1992;267:2197-2203.
  7. Elixhauser A, Steiner C, Palmer L. Clinical Classifications Software (CCS), 2007. Available at: http://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed December 2008.
  8. Parker JP, McCombs JS, Graddy EA. Can pharmacy data improve prediction of hospital outcomes? Comparison with a diagnosis-based comorbidity measure. Med Care. 2003;41:407-419.
  9. Melfi C, Holleman E, Arthur D, Katz B. Selecting a patient characteristics index for the prediction of medical outcomes using administrative data. J Clin Epidemiol. 1995;48:917-926.
  10. Kieszak SM, Flanders WD, Kosinski AS, Shipp CC, Karp H. A comparison of the Charlson Comorbidity Index derived from medical record data and administrative billing data. J Clin Epidemiol. 1999;52:137-142.
Journal of Hospital Medicine. 4(4):234-239.
Keywords: administrative data, diagnosis codes, diagnosis discordance, diagnostic uncertainty, health services research, hospital care, length of stay

Recent research has found that the addition of clinical data to administrative data strengthens the accuracy of predicting inpatient mortality.1, 2 Pine et al.1 showed that including present on admission (POA) codes and numerical laboratory data resulted in substantially better fitting risk adjustment models than those based on administrative data alone. Risk adjustment models, despite improvement with the use of POA codes, are still imperfect and severity adjustment alone does not explain differences in mortality as well as we would hope.2

The addition of POA codes improves the prediction of mortality because they distinguish between conditions that were present at the time of admission and conditions that were acquired during the hospitalization, but it is not known whether the addition of these codes is related to other measures of hospital performance, such as differences in length of stay (LOS). Which of the factors related to the patient's clinical condition at the time of hospital admission drive differences in outcomes?

A patient's admission diagnosis may be an important piece of information that accounts for differences in hospital care. A patient's diagnosis at the time of hospital admission leads to the initial course of treatment. If the admitting diagnosis is inaccurate, a physician may spend critical time following a course of unneeded treatment until the correct diagnosis is made (reflected by a discrepancy between the admitting and discharge diagnosis codes). This discrepancy may be a marker of the fact that, while some patients are admitted to the hospital for treatment of a previously diagnosed condition, other patients require a diagnostic workup to determine the clinical problem.

A discrepancy may also reflect poor systems of documenting critical information and result in delays in care, with potentially serious health consequences.3, 4 If diagnosis discrepancy is a marker of difficult‐to‐diagnose cases, leading to delays in care, we may be able to improve our understanding of perceived differences in the production of high‐quality medical care and proactively identify cases which need more attention at admission to ensure that necessary care is provided as quickly as possible.

Almost universally, comparisons of hospital performance are risk-adjusted to account for differences in case mix and severity across institutions. These risk-adjustment models rely on discharge diagnoses to adjust for clinical differences among patients, even though recent research has shown that models using discharge diagnoses alone are inadequate predictors of variation in mortality among hospitals. While the findings of Pine et al.1 suggest the need to add clinical information, such as laboratory values, to improve these models, that information may be costly for some institutions to collect and report. We aimed to explore whether other simple-to-measure factors, independent of the quality of care provided and routinely collected by hospitals' electronic information systems, can be used to improve risk-adjustment models. To assess the potential of other routinely collected diagnostic information to explain differences in health outcomes, this study examined whether a discrepancy between the admission and discharge diagnoses was associated with hospital LOS.

Patients and Methods

Patient Population

The sample included all patients age 18 years and older who were admitted to and discharged from the general medicine units at Rush University Medical Center between July 2005 and June 2006. We further limited the sample to patients who were admitted via the emergency department (ED) or directly by their physician, excluding patients with scheduled admissions, for whom LOS may vary little, and patients transferred from other hospitals. We also excluded patients admitted directly to the intensive care units, although we retained patients who were transferred to the intensive care units during their stay; only a small percentage of cases (1.2%) fit this designation, and we did not analyze them separately because of the small numbers. Our aim was to constitute a sample of patients admitted for episodic and diagnostically complex sets of symptoms and signs.

Diagnosis Discrepancy

Admission and discharge diagnosis codes were classified using the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD‐9‐CM). An admission diagnosis is routinely documented and coded by hospitals but is not used by most private and public payers for reimbursement purposes, unlike the discharge diagnosis codes. The admission diagnosis code summarizes information known at the time the patient is admitted to the hospital and corresponds to the chief complaint in the history and physical report. Its specificity may depend on a variety of patient and physician‐related factors, and neither the quality of the information collected at admission nor the specificity of the coded information is externally regulated. Only one admission diagnosis code is captured and, like the discharge diagnosis codes, coded at the time of discharge. The admission diagnosis code reflects the amount of information known at the time of admission but is retrospectively coded.

A patient may have multiple discharge diagnosis codes. These codes summarize information collected throughout a hospitalization. The discharge diagnosis codes are used to bill third‐party payers and patients. In addition, governmental agencies, benchmarking institutions, and researchers use the discharge diagnosis codes to classify a patient's condition, identify comorbidities, and measure severity of illness.

We measured discrepancy between admission and discharge diagnoses in two ways. We first compared the admitting diagnosis code with the principal discharge diagnosis code. A match was defined as a patient record in which the two codes were exactly the same at the terminal digit. If the two codes did not match exactly at the terminal digit, we classified the patient as having a discrepancy or mismatch between diagnosis codes. For example, if the admitting diagnosis code was 786.05 (shortness of breath) and the principal discharge diagnosis code was 428 (congestive heart failure, unspecified), the diagnosis codes were classified as discrepant. To test the robustness of our definition of discrepancy between admitting and discharge diagnoses, we created a second variable that compared the admitting diagnosis code with the first five discharge diagnosis codes. If the admitting diagnosis code did not match any of the first five discharge diagnosis codes, the diagnosis codes were classified as discrepant.
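The matching rule described above can be sketched as follows. This is a minimal illustration, not the study's actual code, and the record layout (an admitting code plus an ordered list of discharge codes) is assumed:

```python
def is_discrepant(admit_code, discharge_codes, top_n=1):
    """True if the admitting ICD-9-CM code does not exactly match
    (to the terminal digit) any of the first top_n discharge codes."""
    return admit_code not in discharge_codes[:top_n]

# Example from the text: admitted with 786.05 (shortness of breath),
# discharged with principal diagnosis 428.0 (congestive heart failure).
print(is_discrepant("786.05", ["428.0"]))                     # True: discrepant
# Robustness definition: compare against the first five discharge codes.
print(is_discrepant("786.05", ["428.0", "786.05"], top_n=5))  # False: a match
```

Because the match must be exact at the terminal digit, a pair such as 786.0 and 786.05 would still count as discrepant under this rule.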

We use the term diagnosis discrepancy to refer to records that have a mismatch between admitting and discharge diagnosis codes.

Models and Data Collection

The outcome of interest was inpatient LOS. The primary independent variable was whether the patient record had a discrepancy between the admitting and discharge diagnosis codes.

Our models controlled for the following variables: age; sex; admission source (ED or primary care provider); primary source of insurance (Medicare, Medicaid, or commercial coverage); and severity of illness, measured by the number of comorbid conditions.5, 6 We also controlled for the general type of clinical condition, which was classified by the principal discharge diagnosis code using the Healthcare Cost and Utilization Project's Clinical Classifications Software 2007.7 Data were collected from the institution's clinical data warehouse.

Statistical Analysis

A generalized linear regression model fit with a negative binomial distribution was used to test for an association between inpatient LOS and a discrepancy between admitting and discharge diagnosis codes, adjusting for the variables described above. We reestimated our models without the respective diagnosis discrepancy variable and calculated a likelihood ratio test statistic for the two models to determine whether the addition of diagnosis discrepancy significantly improved our models.
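The model comparison rests on standard likelihood ratio arithmetic, sketched below. With one added parameter (df = 1), the chi-square p-value reduces to erfc(sqrt(LR/2)). The log-likelihood values are illustrative placeholders chosen to reproduce the reported statistic of 120.62; their signs and their assignment to the reduced and full models are assumptions, not values taken from the paper:

```python
import math

def lr_test_df1(llf_reduced, llf_full):
    """Likelihood ratio test for one added parameter (df = 1).
    Returns the LR statistic and its chi-square(1) p-value."""
    lr = 2.0 * (llf_full - llf_reduced)
    p = math.erfc(math.sqrt(lr / 2.0))  # chi-square(1) survival function
    return lr, p

# Placeholder log-likelihoods (signs and model assignment assumed):
lr, p = lr_test_df1(llf_reduced=-11797.54, llf_full=-11737.23)
print(round(lr, 2))  # 120.62
```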

We used two sensitivity tests to assess the specification of our models. First, we included two interaction terms: one for diagnosis discrepancy and ED admission, to assess whether the association between diagnosis discrepancy and LOS differed by admission source; and another for diagnosis discrepancy and the number of comorbidities, to assess whether the association differed by level of patient complexity. Second, we incrementally broadened the definition of a match by comparing the admitting diagnosis with the first two discharge diagnoses, then the first three, and so on through the first 10, and reestimated the regression models under each successively broader definition to further assess the robustness of diagnosis discrepancy as a predictor of LOS.
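The successively broader match definitions can be sketched as a simple loop. The record layout and the toy values are hypothetical:

```python
def discrepancy_rates(records, max_codes=10):
    """For each k = 1..max_codes, the share of records whose admitting
    code matches none of the first k discharge codes."""
    rates = {}
    for k in range(1, max_codes + 1):
        n_discrepant = sum(1 for admit, dx in records if admit not in dx[:k])
        rates[k] = n_discrepant / len(records)
    return rates

# Toy records (hypothetical); the rate can only fall as k grows.
records = [
    ("786.05", ["428.0", "786.05", "401.9"]),  # matches from k = 2 on
    ("780.6",  ["486", "599.0"]),              # never matches
    ("428.0",  ["428.0"]),                     # matches at k = 1
]
rates = discrepancy_rates(records, max_codes=3)
print(rates[1] >= rates[2] >= rates[3])  # True: rate is nonincreasing in k
```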

Results

Of the 5,375 patients discharged between July 2005 and June 2006, 75.6% had a discrepancy between their admitting and principal discharge diagnosis codes. Patients with a discrepancy had significantly longer LOS, were older, had more comorbid conditions, and were more likely to be male, to be admitted through the ED, and to have Medicare (Table 1). Results were similar for the more encompassing definition of a discrepancy between the admitting and the top five discharge diagnoses (results not shown).
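As a quick arithmetic check on the reported share, using the counts from Table 1:

```python
# 4,062 of 5,375 patients had a discrepancy between the admitting and
# principal discharge diagnosis codes.
share = 4062 / 5375
print(round(100 * share, 1))  # 75.6
```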

Sample Characteristics by Presence or Absence of a Discrepancy Between Admission Diagnosis and Principal Discharge Diagnosis
Variable | n | No Discrepancy (n = 1,313) | Discrepancy (n = 4,062) | P*
LOS (days), mean (SE) | | 3.4 (3.6) | 4.2 (4.1) | <0.001
Age (years), mean (SE) | | 56.3 (18.8) | 59.7 (18.6) | <0.001
Comorbid conditions (number), mean (SE) | | 1.2 (1.2) | 1.4 (1.3) | <0.001
Gender (%) | | | | 0.019
  Male | 2,201 | 29.8 | 70.3 |
  Female | 3,174 | 26.1 | 73.9 |
Admission source (%) | | | | <0.001
  Direct | 4,202 | 29.8 | 70.3 |
  ED | 1,173 | 22.9 | 77.1 |
Insurance coverage (%) | | | | <0.001
  Medicare | 2,677 | 21.6 | 78.4 |
  Medicaid | 908 | 26.3 | 73.7 |
  Commercial | 1,790 | 27.7 | 72.3 |
Clinical domain (%) | | | | <0.001
  Endocrine | 370 | 22.7 | 77.3 |
  Nervous system | 230 | 35.7 | 64.4 |
  Circulatory | 1,008 | 19.0 | 81.1 |
  Respiratory | 483 | 16.4 | 83.6 |
  Digestive | 852 | 14.2 | 85.8 |
  Genitourinary | 372 | 19.6 | 80.4 |
  Skin | 249 | 53.0 | 47.0 |
  Musculoskeletal | 276 | 20.3 | 79.7 |
  Injury/poisoning | 549 | 27.9 | 72.1 |
  Other | 986 | 34.7 | 65.3 |

NOTE: Number of patients (n) = 5,375. Numbers in parentheses are standard errors (SEs). *Significance.

Table 2 reports the 10 most common admitting diagnoses that did not match the principal discharge diagnosis code and the 10 most common principal discharge diagnoses that did not match the admitting diagnosis code. The top 10 discrepant admitting diagnosis codes represented nearly one‐half of all cases with a discrepancy between the admitting and discharge diagnoses. The top 10 principal discharge diagnosis codes represented 23% of all discrepant diagnoses. Table 3 lists the 10 most common pairs of mismatched admitting and principal discharge diagnosis codes. The most common mismatched pair was a principal admitting diagnosis code of 786.05 (shortness of breath) and discharge diagnosis code of 428.0 (congestive heart failure, unspecified).

Ten Most Common Discrepant Admission and Principal Discharge Diagnosis Codes
Admission diagnosis codes not matching the principal discharge diagnosis:

Rank | Code | Description | %
1 | 786.05 | Shortness of breath | 11.1
2 | 789.00 | Abdominal pain, unspecified site | 8.5
3 | 780.6 | Fever | 6.7
4 | 786.50 | Chest pain, unspecified | 5.6
5 | 787.01 | Nausea without vomiting | 3.9
6 | 780.99 | Other general symptoms | 3.4
7 | 780.79 | Other malaise and fatigue | 3.0
8 | 780.2 | Syncope and collapse | 2.6
9 | 729.5 | Pain in limb | 2.1
10 | 729.81 | Swelling of limb | 2.0

Principal discharge diagnosis codes not matching the admission diagnosis:

Rank | Code | Description | %
1 | 428.0 | Congestive heart failure, unspecified | 6.0
2 | 486 | Pneumonia, organism unspecified | 3.3
3 | 584.9 | Acute renal failure, unspecified | 2.2
4 | 786.59 | Chest pain, other | 2.1
5 | 599.0 | Urinary tract infection, site not specified | 2.1
6 | 996.81 | Complications of kidney transplant | 1.8
7 | 577.0 | Acute pancreatitis | 1.7
8 | 996.62 | Infection and inflammatory reaction due to other vascular device, implant or graft | 1.4
9 | 434.91 | Cerebral artery occlusion with cerebral infarction, unspecified | 1.3
10 | 008.8 | Intestinal infection, not elsewhere classified | 1.0
Ten Most Common Pairs of Discrepant Admission and Primary Discharge Diagnosis Codes
Admission Diagnosis | Principal Discharge Diagnosis
786.05 Shortness of breath | 428.0 Congestive heart failure, unspecified
786.50 Chest pain, unspecified | 786.59 Chest pain, other
786.05 Shortness of breath | 486 Pneumonia, organism unspecified
780.6 Fever | 486 Pneumonia, organism unspecified
780.6 Fever | 996.62 Infection and inflammatory reaction due to other vascular device, implant or graft
789.00 Abdominal pain, unspecified site | 577.0 Acute pancreatitis
780.6 Fever | 599.0 Urinary tract infection, site not specified
786.05 Shortness of breath | 491.21 Obstructive chronic bronchitis with acute exacerbation
786.05 Shortness of breath | 415.19 Pulmonary embolism and infarction, other
786.05 Shortness of breath | 493.22 Chronic obstructive asthma, with acute exacerbation

Table 4 reports the results of the generalized linear model predicting LOS. Discrepancy between the admitting and principal discharge diagnoses was associated with a 22.5% longer LOS (P < 0.01), translating into a 0.76-day increase at the mean for those with discrepant diagnoses. Our results were robust to the definition of discrepancy: using the definition based on the top five discharge diagnosis codes, a discrepancy between admitting and discharge diagnoses was associated with a 15.4% longer LOS (P < 0.01), translating into a 0.52-day increase. The likelihood ratio tests showed that adding diagnosis discrepancy significantly improved the fit of the regression models using both the principal and the top five discharge diagnosis codes.
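Because the model uses a log link, a coefficient beta translates into a (e^beta - 1) x 100% change in expected LOS. A sketch using the rounded coefficients from Table 4 (the published 22.5% and 15.4% figures come from the unrounded estimates, so the rounded values land slightly lower):

```python
import math

def pct_change(beta):
    """Percent change in expected LOS implied by a log-link coefficient."""
    return (math.exp(beta) - 1.0) * 100.0

print(round(pct_change(0.20), 1))  # 22.1 (vs. 22.5% from the unrounded estimate)
print(round(pct_change(0.14), 1))  # 15.0 (vs. 15.4% from the unrounded estimate)

# At a mean LOS of roughly 3.4 days, a 22.5% increase works out to
# approximately the reported 0.76-day difference.
extra_days = 0.225 * 3.4
```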

Results for Generalized Linear Regression Model Predicting LOS (n = 5,375)
Variable | Model 1 | Model 2 | Model 3
Intercept | 0.98* (0.06) | 0.84* (0.06) | 0.89* (0.06)
Diagnosis discrepancy with principal discharge diagnosis | | 0.20* (0.03) |
Diagnosis discrepancy with top 5 discharge diagnoses | | | 0.14* (0.02)
Age | 0.001 (0.001) | 0.001 (0.001) | 0.001 (0.001)
Female | 0.03 (0.02) | 0.03 (0.02) | 0.03 (0.02)
Emergency department admission | 0.02 (0.03) | 0.03 (0.03) | 0.03 (0.03)
Medicare | 0.15* (0.03) | 0.15* (0.03) | 0.15* (0.03)
Medicaid | 0.04 (0.03) | 0.04 (0.03) | 0.05 (0.03)
Number of comorbid conditions | 0.13* (0.01) | 0.13* (0.01) | 0.13* (0.01)
Log likelihood for model | 11737.23 | 11797.54 | 11771.76
Likelihood ratio test statistic | | 120.62* | 69.06*

NOTE: Model 1 excludes the diagnosis discrepancy variable; model 2 includes the diagnosis discrepancy variable using the principal discharge diagnosis code; model 3 includes the diagnosis discrepancy variable using the first 5 discharge diagnosis codes. The omitted category includes a match in admitting and discharge diagnoses (in models 2 and 3), male, direct admission, and commercial insurance coverage. Models control for clinical domain. Generalized linear models are estimated with a negative binomial distribution. Standard errors (SEs) are shown in parentheses. *Significance at the 1% level or better.

Broadening the definition of a match between admitting and discharge diagnosis codes, from the principal discharge diagnosis code alone to the first 10 discharge diagnosis codes, showed that a diagnosis discrepancy remained significantly associated with a longer LOS even under the broadest definition. The magnitude weakened, however, as the definition was broadened, from 22.5% with the principal discharge diagnosis code alone to 12.1% with the first 10 discharge diagnosis codes (Figure 1).

Figure 1
Association between discrepancy in admission and discharge diagnoses and LOS for the first 10 discharge diagnosis codes (n = 5,375).

Discussion

Discrepancy between admitting and discharge diagnosis codes was associated with a large increase in LOS, even after controlling for age, sex, admission source, insurance, number of comorbid conditions, and clinical domain. This discrepancy translated into an increase of 0.76 days in LOS per general medicine patient, nearly two‐thirds larger than the increase in LOS of 0.47 days associated with having one comorbid condition, and equated to 4,102 additional patient days for the 5,375 general internal medicine patients admitted.

The relative and absolute increases in LOS associated with a diagnosis discrepancy are considerably larger than those associated with measures of comorbid illness in other studies. In a study examining the predictive power of comorbidity measures based on diagnosis codes and outpatient pharmacy records, Parker et al.8 found that including comorbid conditions based on discharge diagnosis codes alone was associated with up to a 0.28-day increase in LOS, and that further including comorbidity markers based on pharmacy data was associated with up to an additional 0.09-day increase. In a study comparing measures of disease severity and comorbidity for predicting LOS in total knee replacement patients, Melfi et al.9 found that the addition of one diagnosis code was associated with a 3.3% increase in LOS. Similarly, Kieszak et al.10 found that the likelihood of an LOS greater than 10 days doubled for patients with carotid endarterectomy and at least one comorbidity.

While a discrepancy between the admitting and discharge diagnosis codes was consistently associated with an increased LOS, the underlying reasons are not yet understood. We can only speculate about the reasons for this association, and further work is needed to test these hypotheses. There are several possible explanations for discrepant cases: (1) poorer documentation at the time of admission, (2) more complexity in terms of the diagnostic task, and (3) less thorough diagnostic workup at the time of admission.

First, we do not think that poor documentation at the time of admission is the most likely explanation. Our ED uses documentation templates for all admitted patients, which standardizes the amount of documentation across many patients. More importantly, for patients admitted via the ED, the admission diagnosis code is assigned by the treating physician rather than by coders working from the documented record, so sparse documentation would not in itself produce a discrepant admission code.

Second, we think the most likely explanation is that patients with discrepant diagnoses are genuinely harder to diagnose. Suppose that care proceeds at the same pace once patients are admitted, whether triage occurred in the ED or before admission: patients are seen soon after admission and the workup promptly ensues. Under these conditions, variation in LOS reflects the additional care required by the most complex cases. If this assertion is true, our finding is new and adds a candidate variable, beyond the comorbid illness we controlled for, for explaining variation in care due to patient severity. We propose that diagnostic uncertainty is a common, previously unexamined component of the complexity of clinical presentations; that is, diagnosis discrepancy is a complexity variable rather than a comorbidity or severity-of-illness variable. A discrepancy between admitting and discharge diagnosis codes could also arise from other patient characteristics, such as an inability to communicate symptoms to the physician because of language or cultural barriers.

Third, if the ED or the preadmission setting fails to provide diagnostic services prior to admission for patients with discrepant diagnoses, regardless of diagnostic complexity, then our finding reflects hospital or system performance. These patients may have had a less thorough workup before admission, so that more of the workup had to be completed during the hospitalization.

Regardless of the reason (perhaps all three operate at some level), our study points to a new component of variation in patient care, and we hope it spurs future research. We are about to embark on a comparison of patients with identical discharge diagnoses but discrepant or nondiscrepant admitting diagnoses, to explore variation in the amount and type of diagnostic and treatment plans provided both before and during hospitalization.

Further supporting diagnostic complexity as the reason for discrepancy, the admitting codes of discrepantly coded patients are nonspecific symptom or sign diagnoses (e.g., shortness of breath, abdominal pain), whereas the discharge diagnoses are more specific (e.g., congestive heart failure, pancreatitis) (Tables 2 and 3). The nonspecific nature of these preliminary codes likely signifies a more clinically complex situation, and, over and above previously described risk-adjustment models, the discrepancy portends greater healthcare needs. For patients admitted without a clear diagnosis of a clinical problem, the diagnostic workup may be more complex and require a longer hospitalization. For these patients, a longer LOS may be a marker not of poor quality of care but of the lack of critical information at the time of admission.

Our comparison of the association between LOS and diagnosis discrepancy, as the admitting diagnosis code was successively matched against a larger number of discharge diagnosis codes, suggests that LOS increases not only when the admitting diagnosis is incorrect or insufficiently specific but also when it is correct yet is not the principal discharge diagnosis. Taken together, these findings suggest that delays in care may result from a lack of clear diagnostic information at the time of admission.

Our findings may advance the understanding of variation in hospital care from two standpoints. First, noting discrepant diagnoses may significantly improve prediction in health services research studies examining variation in hospital performance, even beyond the addition of POA coding. Second, and perhaps more importantly, prospectively identifying patients at admission who carry the nonspecific, preliminary codes identified in our study may allow physicians to target, earlier in their care, the patients with more demanding needs. We realize, however, that before this information could be used prospectively to improve care, coding would have to occur at admission rather than at discharge. At our site, this is true in the ED setting: patients are assigned an admission diagnosis code as they leave the ED, and this code is carried through to discharge without alteration. A nonspecific admission code could, for example, alert the inpatient team that this is a more complex clinical situation requiring earlier consultation. Concurrent coding could also jump-start studies of whether what we have found in this preliminary study reflects poor assessment or genuinely difficult patient presentations. This may not be possible for patients admitted directly from physician offices, because both the admission and discharge codes are determined at discharge on the basis of documentation. Yet a chief complaint is recorded on admission and may serve the same purpose as an admission diagnosis code if the two agree sufficiently.
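A prospective flag of the kind described could be sketched as a simple lookup against the nonspecific admitting codes from Table 2. The code list and function below are illustrative, not a validated trigger:

```python
# Nonspecific, symptom-level ICD-9-CM codes that most often preceded a
# discrepant discharge diagnosis in this study (Table 2):
NONSPECIFIC_ADMIT_CODES = {
    "786.05",  # shortness of breath
    "789.00",  # abdominal pain, unspecified site
    "780.6",   # fever
    "786.50",  # chest pain, unspecified
    "787.01",  # nausea without vomiting
    "780.99",  # other general symptoms
    "780.79",  # other malaise and fatigue
    "780.2",   # syncope and collapse
    "729.5",   # pain in limb
    "729.81",  # swelling of limb
}

def flag_for_early_review(admit_code):
    """Flag admissions coded with a nonspecific symptom diagnosis as
    candidates for earlier diagnostic attention or consultation."""
    return admit_code in NONSPECIFIC_ADMIT_CODES

print(flag_for_early_review("786.05"))  # True
print(flag_for_early_review("428.0"))   # False
```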

Our study has limitations. It is from a single medical center and uses administrative data alone; we did not have access to clinical records for more detailed information about the content and completeness of documentation at the time of admission. Our observations should be tested in other hospital systems. Another limitation is that we focus on discrepant cases rather than on patients without a discrepancy; however, the aim of testing for discrepancy is to identify opportunities for improvement. An in-depth chart review of patients with similar final diagnoses, some with discrepant and some with nondiscrepant codes, could help explain why LOS differed between the two groups. The next step, should our observations be confirmed, is to assess systematically whether other characteristics differentiate cases in which a discrepancy between diagnosis codes reflects diagnostic uncertainty from those in which it reflects diagnostic oversight or error. A method to systematically identify conditions at admission that are likely to be misdiagnosed or diagnosed late could substantially improve the overall quality of care provided in the hospital.

Recent research has found that the addition of clinical data to administrative data strengthens the accuracy of predicting inpatient mortality.[1, 2] Pine et al.[1] showed that including present on admission (POA) codes and numerical laboratory data resulted in substantially better-fitting risk-adjustment models than those based on administrative data alone. Risk-adjustment models, despite improvement with the use of POA codes, remain imperfect, and severity adjustment alone does not explain differences in mortality as well as we would hope.[2]

The addition of POA codes improves prediction of mortality because they distinguish conditions that were present at the time of admission from conditions acquired during the hospitalization, but it is not known whether these codes are related to other measures of hospital performance, such as differences in length of stay (LOS). Which factors related to the patient's clinical condition at the time of hospital admission drive differences in outcomes?

A patient's admission diagnosis may be an important piece of information that accounts for differences in hospital care. A patient's diagnosis at the time of hospital admission leads to the initial course of treatment. If the admitting diagnosis is inaccurate, a physician may spend critical time following a course of unneeded treatment until the correct diagnosis is made (reflected by a discrepancy between the admitting and discharge diagnosis codes). This discrepancy may be a marker of the fact that, while some patients are admitted to the hospital for treatment of a previously diagnosed condition, other patients require a diagnostic workup to determine the clinical problem.

A discrepancy may also reflect poor systems for documenting critical information and result in delays in care, with potentially serious health consequences.[3, 4] If diagnosis discrepancy is a marker of difficult-to-diagnose cases that lead to delays in care, it may improve our understanding of perceived differences in the production of high-quality medical care and proactively identify cases that need more attention at admission to ensure that necessary care is provided as quickly as possible.

Almost universally, comparisons of hospital performance are risk-adjusted to account for differences in case mix and severity across institutions. These risk-adjustment models rely on discharge diagnoses to adjust for clinical differences among patients, even though recent research has shown that models using discharge diagnoses alone are inadequate predictors of variation in mortality among hospitals. While the findings of Pine et al.[1] suggest the need to add certain clinical information, such as laboratory values, to improve these models, this information may be costly for some institutions to collect and report. We aimed to explore whether other simple-to-measure factors, independent of the quality of care provided and routinely collected by hospitals' electronic information systems, can be used to improve risk-adjustment models. To assess the potential of other routinely collected diagnostic information in explaining differences in health outcomes, this study examined whether a discrepancy between the admission and discharge diagnoses was associated with hospital LOS.

Patients and Methods

Patient Population

The sample included all patients age 18 years and older who were admitted to and discharged from the general medicine units at Rush University Medical Center between July 2005 and June 2006. We further limited the sample to patients who were admitted via the emergency department (ED) or directly by their physician, excluding scheduled admissions, for which LOS may vary little, and patients transferred from other hospitals. We also excluded patients admitted directly to the intensive care units, but retained patients who were transferred to the intensive care units during their stay; only a small percentage of cases (1.2%) fit this designation, and we did not explore the effects of this clinical situation because of the small numbers. Our aim was to constitute a sample of patients who were more likely admitted for an episodic and diagnostically complex set of symptoms and signs.

Diagnosis Discrepancy

Admission and discharge diagnosis codes were classified using the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). An admission diagnosis is routinely documented and coded by hospitals but, unlike the discharge diagnosis codes, is not used by most private and public payers for reimbursement purposes. The admission diagnosis code summarizes information known at the time the patient is admitted to the hospital and corresponds to the chief complaint in the history and physical report. Its specificity may depend on a variety of patient- and physician-related factors, and neither the quality of the information collected at admission nor the specificity of the coded information is externally regulated. Only one admission diagnosis code is captured and, like the discharge diagnosis codes, it is coded at the time of discharge; it thus reflects the amount of information known at the time of admission but is retrospectively coded.

A patient may have multiple discharge diagnosis codes. These codes summarize information collected throughout a hospitalization. The discharge diagnosis codes are used to bill third‐party payers and patients. In addition, governmental agencies, benchmarking institutions, and researchers use the discharge diagnosis codes to classify a patient's condition, identify comorbidities, and measure severity of illness.

We measured discrepancy between admission and discharge diagnoses in two ways. We first compared the admitting diagnosis code with the principal discharge diagnosis code. A match was defined as a patient record in which the two codes were exactly the same at the terminal digit. If the two codes did not match exactly at the terminal digit, we classified the patient as having a discrepancy, or mismatch, between diagnosis codes. For example, if the admitting diagnosis code was 786.05 (shortness of breath) and the principal discharge diagnosis code was 428.0 (congestive heart failure, unspecified), the diagnosis codes were classified as discrepant. To test the robustness of our definition of discrepancy between admitting and discharge diagnoses, we created a second variable that compared the admitting diagnosis code with the first five discharge diagnosis codes. If the admitting diagnosis code did not match any of the first five discharge diagnosis codes, the diagnosis codes were classified as discrepant.
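
Both match definitions can be expressed as a short function. This is a hypothetical Python sketch, not the study's actual code; the function name and record layout are our assumptions, and the example reuses the shortness of breath/congestive heart failure pair from the text:

```python
def is_discrepant(admit_code, discharge_codes, top_k=1):
    """Classify a record as discrepant if the admitting ICD-9-CM code does not
    exactly match (to the terminal digit) any of the first top_k discharge codes."""
    return admit_code not in discharge_codes[:top_k]

# Admitted with shortness of breath (786.05); discharge codes list
# congestive heart failure, unspecified (428.0) first.
print(is_discrepant("786.05", ["428.0", "786.05"]))           # True (principal code only)
print(is_discrepant("786.05", ["428.0", "786.05"], top_k=5))  # False (top-5 definition)
```

Setting `top_k=1` reproduces the primary definition (principal discharge code only); `top_k=5` reproduces the robustness definition.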

We use the term diagnosis discrepancy to refer to records that have a mismatch between admitting and discharge diagnosis codes.

Models and Data Collection

The outcome of interest was inpatient LOS. The primary independent variable was whether the patient record had a discrepancy between the admitting and discharge diagnosis codes.

Our models controlled for the following variables: age; sex; admission source (ED or primary care provider); primary source of insurance (Medicare, Medicaid, or commercial coverage); and severity of illness, measured by the number of comorbid conditions.[5, 6] We also controlled for the general type of clinical condition, classified from the principal discharge diagnosis code using the Healthcare Cost and Utilization Project's Clinical Classifications Software, 2007.[7] Data were collected from the institution's clinical data warehouse.

Statistical Analysis

A generalized linear regression model fit with a negative binomial distribution was used to test for an association between inpatient LOS and a discrepancy between admitting and discharge diagnosis codes, adjusting for the variables described above. We reestimated our models without the respective diagnosis discrepancy variable and calculated a likelihood ratio test statistic for the two models to determine whether the addition of diagnosis discrepancy significantly improved our models.
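
The likelihood ratio statistic is twice the difference in log likelihoods between the nested models. A minimal sketch, using the model 1 and model 2 log likelihoods later reported in Table 4 (the function is our illustration, not the study's code):

```python
def likelihood_ratio_stat(ll_reduced, ll_full):
    """Likelihood ratio test statistic for nested models:
    2 * (log likelihood of full model - log likelihood of reduced model)."""
    return 2.0 * (ll_full - ll_reduced)

# Table 4: model 1 (no discrepancy variable) vs. model 2 (adds discrepancy
# with the principal discharge diagnosis). One added parameter, so the
# statistic is compared against a chi-square distribution with 1 df.
print(round(likelihood_ratio_stat(11737.23, 11797.54), 2))  # 120.62
```

The resulting 120.62 far exceeds the 1-df chi-square critical value of 3.84, matching the reported conclusion that adding the discrepancy variable significantly improves model fit.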

We used two sensitivity tests to assess the specification of our models. First, we included two interaction terms: one between diagnosis discrepancy and ED admission, to assess whether the association between diagnosis discrepancy and LOS differed by admission source, and one between diagnosis discrepancy and the number of comorbidities, to assess whether the association differed by level of patient complexity. Second, we incrementally broadened the definition of a match by comparing the admitting diagnosis with the first two discharge diagnoses, then the first three, and so on through the first 10, reestimating the regression model for each successively broader definition to further assess the robustness of diagnosis discrepancy as a predictor of LOS.
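
The second sensitivity analysis, successively broadening the match window, can be sketched as a loop over match breadths. This is a hypothetical Python illustration with a toy record layout of (admitting code, discharge codes in billing order):

```python
def discrepancy_rates(records, max_k=10):
    """For each match breadth k (principal discharge code only, up to the
    first max_k codes), return the share of records classified as discrepant."""
    rates = {}
    for k in range(1, max_k + 1):
        flags = [admit not in codes[:k] for admit, codes in records]
        rates[k] = sum(flags) / len(flags)
    return rates

# Two toy records: the first matches only at the second discharge code,
# the second never matches.
records = [("786.05", ["428.0", "786.05", "401.9"]),
           ("780.6",  ["486", "599.0"])]
print(discrepancy_rates(records, max_k=3))  # {1: 1.0, 2: 0.5, 3: 0.5}
```

In the study, the discrepancy flag produced at each breadth k would replace the primary discrepancy variable before reestimating the regression model.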

Results

Of the 5,375 patients discharged between July 2005 and June 2006, 75.6% had a discrepancy between their admitting and principal discharge diagnosis. Patients with a discrepancy between their admitting and principal discharge diagnosis codes had significantly longer LOS, were older, had more comorbid conditions, and were more likely to be male, admitted through the ED, and have Medicare (Table 1). Results were similar for the more encompassing definition of a discrepancy between admitting and the top 5 discharge diagnoses (results not shown).

Sample Characteristics by Presence or Absence of a Discrepancy Between Admission Diagnosis and Principal Discharge Diagnosis

| Variable | n | No Discrepancy (n = 1,313) | Discrepancy (n = 4,062) | P* |
|---|---|---|---|---|
| LOS (days), mean (SE) | | 3.4 (3.6) | 4.2 (4.1) | <0.001 |
| Age (years), mean (SE) | | 56.3 (18.8) | 59.7 (18.6) | <0.001 |
| Comorbid conditions (number), mean (SE) | | 1.2 (1.2) | 1.4 (1.3) | <0.001 |
| Gender (%) | | | | 0.019 |
| Male | 2,201 | 29.8 | 70.3 | |
| Female | 3,174 | 26.1 | 73.9 | |
| Admission source (%) | | | | <0.001 |
| Direct | 4,202 | 29.8 | 70.3 | |
| ED | 1,173 | 22.9 | 77.1 | |
| Insurance coverage (%) | | | | <0.001 |
| Medicare | 2,677 | 21.6 | 78.4 | |
| Medicaid | 908 | 26.3 | 73.7 | |
| Commercial | 1,790 | 27.7 | 72.3 | |
| Clinical domain (%) | | | | <0.001 |
| Endocrine | 370 | 22.7 | 77.3 | |
| Nervous system | 230 | 35.7 | 64.4 | |
| Circulatory | 1,008 | 19.0 | 81.1 | |
| Respiratory | 483 | 16.4 | 83.6 | |
| Digestive | 852 | 14.2 | 85.8 | |
| Genitourinary | 372 | 19.6 | 80.4 | |
| Skin | 249 | 53.0 | 47.0 | |
| Musculoskeletal | 276 | 20.3 | 79.7 | |
| Injury/poisoning | 549 | 27.9 | 72.1 | |
| Other | 986 | 34.7 | 65.3 | |

NOTE: Number of patients (n) = 5,375. Numbers in parentheses are standard errors (SEs). *Significance.

Table 2 reports the 10 most common admitting diagnoses that did not match the principal discharge diagnosis code and the 10 most common principal discharge diagnoses that did not match the admitting diagnosis code. The top 10 discrepant admitting diagnosis codes represented nearly one-half of all cases with a discrepancy between the admitting and discharge diagnoses. The top 10 principal discharge diagnosis codes represented 23% of all discrepant diagnoses. Table 3 lists the 10 most common pairs of mismatched admitting and principal discharge diagnosis codes. The most common mismatched pair was an admitting diagnosis code of 786.05 (shortness of breath) and discharge diagnosis code of 428.0 (congestive heart failure, unspecified).

Ten Most Common Discrepant Admission and Principal Discharge Diagnosis Codes

| Rank | Admission Diagnosis Code Not Matching Principal Discharge Diagnosis | % | Principal Discharge Diagnosis Code Not Matching Admission Diagnosis Code | % |
|---|---|---|---|---|
| 1 | 786.05 Shortness of breath | 11.1 | 428.0 Congestive heart failure, unspecified | 6.0 |
| 2 | 789.00 Abdominal pain, unspecified site | 8.5 | 486 Pneumonia, organism unspecified | 3.3 |
| 3 | 780.6 Fever | 6.7 | 584.9 Acute renal failure, unspecified | 2.2 |
| 4 | 786.50 Chest pain, unspecified | 5.6 | 786.59 Chest pain, other | 2.1 |
| 5 | 787.01 Nausea with vomiting | 3.9 | 599.0 Urinary tract infection, site not specified | 2.1 |
| 6 | 780.99 Other general symptoms | 3.4 | 996.81 Complications of kidney transplant | 1.8 |
| 7 | 780.79 Other malaise and fatigue | 3.0 | 577.0 Acute pancreatitis | 1.7 |
| 8 | 780.2 Syncope and collapse | 2.6 | 996.62 Infection and inflammatory reaction due to other vascular device, implant, or graft | 1.4 |
| 9 | 729.5 Pain in limb | 2.1 | 434.91 Cerebral artery occlusion with cerebral infarction, unspecified | 1.3 |
| 10 | 729.81 Swelling of limb | 2.0 | 008.8 Intestinal infection, not elsewhere classified | 1.0 |
Ten Most Common Pairs of Discrepant Admission and Principal Discharge Diagnosis Codes

| Admission Diagnosis | Principal Discharge Diagnosis |
|---|---|
| 786.05 Shortness of breath | 428.0 Congestive heart failure, unspecified |
| 786.50 Chest pain, unspecified | 786.59 Chest pain, other |
| 786.05 Shortness of breath | 486 Pneumonia, organism unspecified |
| 780.6 Fever | 486 Pneumonia, organism unspecified |
| 780.6 Fever | 996.62 Infection and inflammatory reaction due to other vascular device, implant, or graft |
| 789.00 Abdominal pain, unspecified site | 577.0 Acute pancreatitis |
| 780.6 Fever | 599.0 Urinary tract infection, site not specified |
| 786.05 Shortness of breath | 491.21 Obstructive chronic bronchitis with acute exacerbation |
| 786.05 Shortness of breath | 415.19 Pulmonary embolism and infarction, other |
| 786.05 Shortness of breath | 493.22 Chronic obstructive asthma, with acute exacerbation |

Table 4 reports the results of the generalized linear model predicting LOS. Discrepancy between the admitting and principal discharge diagnoses was associated with a 22.5% longer LOS (P < 0.01), translating into a 0.76‐day increase at the mean for those with discrepant diagnoses. Our results are robust to our definition of discrepancy between admitting and discharge diagnoses. Using the discrepancy definition based on the top five discharge diagnosis codes, a discrepancy between admitting and discharge diagnoses was associated with a 15.4% longer LOS (P < 0.01), translating into a 0.52‐day increase. Results of the likelihood ratio test showed that the addition of diagnosis discrepancy significantly improved the fit of the regression models using both the principal and top five discharge diagnosis codes.
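
Because the model uses a log link, a coefficient β on the discrepancy indicator implies a (e^β − 1) × 100% change in expected LOS. Applying this to the rounded Table 4 coefficients approximately reproduces the percentages quoted above; the small gaps (22.1% vs. 22.5%, 15.0% vs. 15.4%) reflect rounding of the coefficients to two decimals:

```python
import math

def pct_los_change(beta):
    """Percent change in expected LOS implied by a log-link (negative
    binomial) regression coefficient on a binary predictor."""
    return (math.exp(beta) - 1.0) * 100.0

print(round(pct_los_change(0.20), 1))  # 22.1, vs. the reported 22.5%
print(round(pct_los_change(0.14), 1))  # 15.0, vs. the reported 15.4%
```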

Results for Generalized Linear Regression Model Predicting LOS (n = 5,375)

| Variable | Model 1 | Model 2 | Model 3 |
|---|---|---|---|
| Intercept | 0.98* (0.06) | 0.84* (0.06) | 0.89* (0.06) |
| Diagnosis discrepancy with principal discharge diagnosis | | 0.20* (0.03) | |
| Diagnosis discrepancy with top 5 discharge diagnoses | | | 0.14* (0.02) |
| Age | 0.001 (0.001) | 0.001 (0.001) | 0.001 (0.001) |
| Female | 0.03 (0.02) | 0.03 (0.02) | 0.03 (0.02) |
| Emergency department admission | 0.02 (0.03) | 0.03 (0.03) | 0.03 (0.03) |
| Medicare | 0.15* (0.03) | 0.15* (0.03) | 0.15* (0.03) |
| Medicaid | 0.04 (0.03) | 0.04 (0.03) | 0.05 (0.03) |
| Number of comorbid conditions | 0.13* (0.01) | 0.13* (0.01) | 0.13* (0.01) |
| Log likelihood for model | 11,737.23 | 11,797.54 | 11,771.76 |
| Likelihood ratio test statistic | | 120.62* | 69.06* |

NOTE: Model 1 excludes the diagnosis discrepancy variable; model 2 includes the diagnosis discrepancy variable using the principal discharge diagnosis code; model 3 includes the diagnosis discrepancy variable using the first 5 discharge diagnosis codes. Omitted categories are a match between admitting and discharge diagnoses (models 2 and 3), male, direct admission, and commercial insurance coverage. Models control for clinical domain. Generalized linear models were estimated with a negative binomial distribution. Standard errors (SEs) are shown in parentheses. *Significance at the 1% level or better.

Broadening our definition of a match between admitting and discharge diagnosis codes from matching only on the principal discharge diagnosis code to the first 10 discharge diagnosis codes showed that even when using the first 10 discharge diagnoses, a diagnosis discrepancy still significantly increased LOS. The magnitude weakened, however, as the definition of a match in diagnosis codes was broadened, ranging from 22.5% when including the principal discharge diagnosis code only to 12.1% when including the first 10 discharge diagnosis codes (Figure 1).

Figure 1
Association between discrepancy in admission and discharge diagnoses and LOS for the first 10 discharge diagnosis codes (n = 5,375).

Discussion

Discrepancy between admitting and discharge diagnosis codes was associated with a large increase in LOS, even after controlling for age, sex, admission source, insurance, number of comorbid conditions, and clinical domain. This discrepancy translated into an increase of 0.76 days in LOS per general medicine patient, nearly two‐thirds larger than the increase in LOS of 0.47 days associated with having one comorbid condition, and equated to 4,102 additional patient days for the 5,375 general internal medicine patients admitted.

The relative and absolute increase in LOS associated with a diagnosis discrepancy is considerably larger than that associated with measures of comorbid illness found in other studies. In a study examining the predictive power of comorbidity measures based on diagnosis codes and outpatient pharmacy records, Parker et al.[8] found that the inclusion of comorbid conditions based on discharge diagnosis codes alone was associated with up to a 0.28-day increase in LOS, and the further inclusion of comorbidity markers based on pharmacy data was associated with up to an additional 0.09-day increase. In a study comparing different measures of disease severity and comorbidity in predicting LOS for total knee replacement patients, Melfi et al.[9] found that the addition of one diagnosis code was associated with a 3.3% increase in LOS. Similarly, Kieszak et al.[10] found that the likelihood of an LOS greater than 10 days increased two-fold for patients undergoing carotid endarterectomy who had at least one comorbidity.

While a discrepancy between the admitting and discharge diagnosis codes was consistently associated with an increased LOS, the underlying reasons are not yet understood. We can only speculate about the reasons for this association, and further work is needed to test these hypotheses. There are several possible explanations for discrepant cases: (1) poorer documentation at the time of admission, (2) more complexity in terms of the diagnostic task, and (3) less thorough diagnostic workup at the time of admission.

First, we do not think that poor documentation at the time of admission is the most likely explanation. Our ED uses documentation templates for all admitted patients, equalizing the amount of documentation for many patients. More importantly, diagnosis codes at the time of admission via the ED are assigned by physicians, not by coders working from documented information.

We think the most likely reason is that patients with discrepant diagnoses are truly harder to diagnose. Assume that the time to provide care once a patient is admitted is the same regardless of ED or preadmission triage: all patients are seen nearly as soon as they are admitted, and the workup promptly ensues. Under these conditions, variation in LOS may be due to the additional care needed for the most severely ill. If this assertion is true, our finding is a new one and adds a new candidate variable to explain variation in care due to patient severity, beyond the comorbid illness we controlled for. We propose that diagnostic uncertainty is a common, previously unexamined component of the complexity of clinical presentations; that is, diagnosis discrepancy is a complexity variable rather than a comorbid, severity-of-illness variable. A discrepancy between admitting and discharge diagnosis codes could also be due to other patient characteristics, such as a patient's inability to communicate his or her symptoms to the physician because of language or cultural barriers.

However, regarding the third possible reason: if the ED or the preadmission setting fails to provide diagnostic services prior to admission for patients with discrepant diagnoses, regardless of diagnostic complexity, then our finding reflects a hospital or system performance variable. Patients with discrepant diagnoses may have had a less thorough workup prior to admission, requiring more workup during the admission.

Regardless of the reason (perhaps all three reasons are involved at some level), our study points to a new component of variation in patient care. We hope our finding spurs future research efforts. We are about to embark on a comparison of patients with identical discharge diagnoses but discrepant or nondiscrepant admission diagnoses to explore variations in the amount and type of diagnostic and treatment plans provided both before and during hospitalization.

Further supporting diagnostic complexity as the reason for discrepancy, the admission codes of discrepantly coded patients are nonspecific symptom or sign diagnoses (e.g., shortness of breath, abdominal pain), while the discharge diagnoses are more specific (e.g., congestive heart failure, pancreatitis) (Tables 2 and 3). The nonspecific nature of the preliminary codes likely signifies more clinically complex situations; when noted, over and above previously described risk-adjustment models, the discrepancy portends greater healthcare needs. For patients admitted without a clear diagnosis of a clinical problem, diagnostic workups may be more complex and require longer hospitalization. For these patients, a longer LOS may not be a marker of poor quality of care, but instead of the lack of critical information present at the time of admission.

Our comparison of the association between LOS and a discrepancy in diagnosis codes when the admitting diagnosis code was successively matched to a larger number of discharge diagnosis codes suggests that LOS increases not only when the admitting diagnosis is incorrect or not sufficiently specific, but also when the admitting diagnosis is correct, but not the principal discharge diagnosis. Taken together, these findings suggest that delays in care may result from lack of clear patient diagnostic information at the time of admission.

Our findings may advance the understanding of variations in hospital care from two standpoints. First, noting discrepant diagnoses may significantly improve prediction in health services research studies examining variations in hospital performance, even beyond the addition of POA coding. Second, and perhaps more importantly, prospectively identifying patients at admission with the nonspecific, preliminary codes identified in our study may allow physicians to target, earlier in care, patients with more demanding care needs. We realize, however, that before this information could be used prospectively to improve care, coding would have to be done at admission rather than at discharge. At our site, this is true in the ED setting: patients are assigned an admission diagnosis code as they leave the ED, and this code is carried through to discharge without alteration. A nonspecific admission code could, for example, alert those caring for the patient in the hospital that this is a more complex clinical situation requiring earlier consultation. Concurrent coding could also jump-start studies to better understand whether what we have found in this preliminary study is due to poor assessment or to difficult patient situations. This may not be possible for patients admitted directly from physician offices, because both the admission and discharge codes are determined at the time of discharge and based on documentation. Yet a chief complaint is provided on admission that may serve the same purpose as an admission diagnosis code if the two are sufficiently in agreement.

Our study has limitations. It is from a single medical center and uses administrative data alone; we did not have access to clinical records for more detailed information about the content and completeness of medical records at the time of admission. Our observations should be tested in other hospital systems. Another limitation is that we focus on patients with a discrepancy rather than those without one; however, the aim of testing for discrepancy is to focus on improvement. A more in-depth chart review of patients with similar final diagnoses, some with discrepant and others with nondiscrepant codes, may help assess why LOS varied between the two groups. The next step, should our observations be confirmed, is to systematically assess whether other characteristics differentiate cases in which a discrepancy between diagnosis codes is due to diagnostic uncertainty from those in which it is due to diagnostic oversight or error. A method to systematically identify conditions at admission that are likely to be misdiagnosed, or to have a delayed diagnosis, may substantially improve the overall quality of care provided in the hospital.

References
  1. Pine M, Jordan HS, Elixhauser A, et al. Enhancement of claims data to improve risk adjustment of hospital mortality. JAMA. 2007;297:71–76.
  2. Iezzoni LI. The risks of risk adjustment. JAMA. 1997;278:1600–1607.
  3. Jencks SF, Huff ED, Cuerdon T. Change in the quality of care delivered to Medicare beneficiaries, 1998–1999 to 2000–2001. JAMA. 2003;289:305–312.
  4. Graff LG, Wang Y, Borkowski B, et al. Delay in the diagnosis of acute myocardial infarction: effect on quality of care and its assessment. Acad Emerg Med. 2006;13:931–938.
  5. Rochon PA, Katz JN, Morrow LA, et al. Comorbid illness is associated with survival and length of hospital stay in patients with chronic disability: a prospective comparison of three comorbidity indices. Med Care. 1996;34:1093–1101.
  6. Iezzoni LI, Foley SM, Daley J, Hughes J, Fisher ES, Heeren T. Comorbidities, complications and coding bias: does the number of diagnosis codes matter in predicting in-hospital mortality? JAMA. 1992;267:2197–2203.
  7. Elixhauser A, Steiner C, Palmer L. Clinical Classifications Software (CCS), 2007. Available at: http://www.hcup‐us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed December 2008.
  8. Parker JP, McCombs JS, Graddy EA. Can pharmacy data improve prediction of hospital outcomes? Comparison with a diagnosis-based comorbidity measure. Med Care. 2003;41:407–419.
  9. Melfi C, Holleman E, Arthur D, Katz B. Selecting a patient characteristics index for the prediction of medical outcomes using administrative data. J Clin Epidemiol. 1995;48:917–926.
  10. Kieszak SM, Flanders WD, Kosinski AS, Shipp CC, Karp H. A comparison of the Charlson Comorbidity Index derived from medical record data and administrative billing data. J Clin Epidemiol. 1999;52:137–142.
Issue
Journal of Hospital Medicine - 4(4)
Page Number
234-239
Display Headline
Discrepancy between admission and discharge diagnoses as a predictor of hospital length of stay
Legacy Keywords
administrative data, diagnosis codes, diagnosis discordance, diagnostic uncertainty, health services research, hospital care, length of stay
Copyright © 2009 Society of Hospital Medicine
Correspondence Location
Rush University, Department of Health Systems Management, 1700 West Van Buren Street, TOB Suite 126B, Chicago, IL 60612