The TEND (Tomorrow’s Expected Number of Discharges) Model Accurately Predicted the Number of Patients Who Were Discharged from the Hospital the Next Day
Hospitals typically allocate beds based on historical patient volumes. If funding decreases, hospitals will usually try to maximize resource utilization by allocating beds to attain occupancies close to 100% for significant periods of time. This will invariably cause days in which hospital occupancy exceeds capacity, at which time critical entry points (such as the emergency department and operating room) will become blocked. This creates significant concerns over the patient quality of care.
Hospital administrators have very few options when hospital occupancy exceeds 100%. They could postpone admissions for “planned” cases, bring in additional staff to increase capacity, or instigate additional methods to increase hospital discharges such as expanding care resources in the community. All options are costly, bothersome, or cannot be actioned immediately. The need for these options could be minimized by enabling hospital administrators to make more informed decisions regarding hospital bed management by knowing the likely number of discharges in the next 24 hours.
Predicting the number of people who will be discharged in the next day can be approached in several ways. One approach would be to calculate each patient’s expected length of stay and then use the variation around that estimate to calculate each day’s discharge probability. Several studies have attempted to model hospital length of stay using a broad assortment of methodologies, but a mechanism to accurately predict this outcome has been elusive1,2 (with Verburg et al.3 concluding in their study’s abstract that “…it is difficult to predict length of stay…”). A second approach would be to use survival analysis methods to generate each patient’s hazard of discharge over time, which could be directly converted to an expected daily risk of discharge. However, this approach is complicated by the concurrent need to include time-dependent covariates and consider the competing risk of death in hospital, which can complicate survival modeling.4,5 A third approach would be the implementation of a longitudinal analysis using marginal models to predict the daily probability of discharge,6 but this method quickly overwhelms computer resources when large datasets are present.
In this study, we decided to use nonparametric models to predict the daily number of hospital discharges. We first identified patient groups with distinct discharge patterns. We then calculated the conditional daily discharge probability of patients in each of these groups. Finally, these conditional daily discharge probabilities were then summed for each hospital day to generate the expected number of discharges in the next 24 hours. This paper details the methods we used to create our model and the accuracy of its predictions.
METHODS
Study Setting and Databases Used for Analysis
The study took place at The Ottawa Hospital, a 1000-bed teaching hospital with 3 campuses that is the primary referral center in our region. The study was approved by our local research ethics board.
The Patient Registry Database records the date and time of admission for each patient (defined as the moment that a patient’s admission request is registered in the patient registration) and discharge (defined as the time when the patient’s discharge from hospital was entered into the patient registration) for hospital encounters. Emergency department encounters were also identified in the Patient Registry Database along with admission service, patient age and sex, and patient location throughout the admission. The Laboratory Database records all laboratory studies and results on all patients at the hospital.
Study Cohort
We used the Patient Registry Database to identify all people aged 1 year or more who were admitted to the hospital between January 1, 2013, and December 31, 2015. This time frame was selected to ensure that (i) data were complete; and (ii) complete calendar years of data were available for both derivation (patient-days in 2013-2014) and validation (2015) cohorts. Patients who were observed in the emergency room without admission to hospital were not included.
Study Outcome
The study outcome was the number of patients discharged from the hospital each day. For the analysis, the reference point for each day was 1 second past midnight; therefore, values for time-dependent covariates up to and including midnight were used to predict the number of discharges in the next 24 hours.
Study Covariates
Baseline (ie, time-independent) covariates included patient age and sex, admission service, hospital campus, whether or not the patient was admitted from the emergency department (all determined from the Patient Registry Database), and the Laboratory-based Acute Physiological Score (LAPS). The latter, which was calculated with the Laboratory Database using results for 14 tests (arterial pH, PaCO2, PaO2, anion gap, hematocrit, total white blood cell count, serum albumin, total bilirubin, creatinine, urea nitrogen, glucose, sodium, bicarbonate, and troponin I) measured in the 24-hour time frame preceding hospitalization, was derived by Escobar and colleagues7 to measure severity of illness and was subsequently validated in our hospital.8 The independent association of each laboratory perturbation with risk of death in hospital is reflected by the number of points assigned to each lab value with the total LAPS being the sum of these values. Time-dependent covariates included weekday in hospital and whether or not patients were in the intensive care unit.
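The LAPS calculation described above reduces to summing per-test point values. The sketch below is a minimal illustration of that scoring structure; the point tables shown are hypothetical placeholders (the real weights are those derived by Escobar and colleagues7), and only 2 of the 14 tests are included.

```python
# Sketch of LAPS as a sum of per-test points. The point tables below are
# HYPOTHETICAL placeholders for illustration; the real weights come from
# Escobar et al. (reference 7) and cover 14 laboratory tests.

def laps_score(labs, point_tables):
    """Sum the points assigned to each lab result measured in the 24 hours
    before admission; tests without a result contribute 0 points."""
    total = 0
    for test, value in labs.items():
        table = point_tables.get(test)
        if table is None:
            continue
        # each table is a list of (upper_bound, points), scanned in order
        for upper, points in table:
            if value <= upper:
                total += points
                break
    return total

# Hypothetical point tables for two tests (illustration only).
POINTS = {
    "creatinine": [(1.0, 0), (2.0, 4), (float("inf"), 10)],
    "sodium": [(130, 8), (150, 0), (float("inf"), 6)],
}

print(laps_score({"creatinine": 2.5, "sodium": 128}, POINTS))  # 10 + 8 = 18
```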
Analysis
We used 3 stages to create a model to predict the daily expected number of discharges: we identified discharge risk strata containing patients having similar discharge patterns using data from patients in the derivation cohort (first stage); then, we generated the preliminary probability of discharge by determining the daily discharge probability in each discharge risk stratum (second stage); finally, we modified the probability from the second stage based on the weekday and admission service and summed these probabilities to create the expected number of discharges on a particular date (third stage).
The first stage identified discharge risk strata based on the covariates listed above. This was determined by using a survival tree approach9 with proportional hazards regression models to generate the “splits.” These models were offered all covariates listed in the Study Covariates section. Admission service was clustered within 4 departments (obstetrics/gynecology, psychiatry, surgery, and medicine) and day of week was “binarized” into weekday/weekend-holiday (because the use of categorical variables with large numbers of groups can “stunt” regression trees due to small numbers of patients—and, therefore, statistical power—in each subgroup). The proportional hazards model identified the covariate having the strongest association with time to discharge (based on the Wald χ2 value divided by the degrees of freedom). This variable was then used to split the cohort into subgroups (with continuous covariates being categorized into quartiles). The proportional hazards model was then repeated in each subgroup (with the previous splitting variable[s] excluded from the model). This process continued until no variable was associated with time to discharge with a P value less than .0001. This survival tree was then used to cluster all patients into distinct discharge risk strata.
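The greedy recursive partitioning described above can be sketched as follows. This is a structural skeleton only: where the study scores each candidate split with a Cox proportional hazards Wald χ2 per degree of freedom and a P < .0001 stopping rule, this stdlib-only stand-in scores binary covariates by the absolute difference in mean length of stay, with an arbitrary threshold. The toy cohort and covariate names are illustrative assumptions.

```python
# Skeleton of greedy survival-tree partitioning. STAND-IN score: absolute
# difference in mean length of stay between the two candidate subgroups
# (the study uses a Cox model Wald chi-square per degree of freedom).
# Patients are dicts of binary covariates plus "los" (length of stay, days).

def split_score(patients, cov):
    a = [p["los"] for p in patients if p[cov]]
    b = [p["los"] for p in patients if not p[cov]]
    if not a or not b:
        return 0.0
    return abs(sum(a) / len(a) - sum(b) / len(b))

def grow_tree(patients, covariates, min_score=1.0, min_n=4):
    """Recursively split on the strongest covariate; each leaf is one
    discharge risk stratum. Used splitters are excluded further down."""
    if len(patients) < min_n or not covariates:
        return patients                       # leaf: a discharge risk stratum
    best = max(covariates, key=lambda c: split_score(patients, c))
    if split_score(patients, best) < min_score:
        return patients                       # stopping rule reached
    rest = [c for c in covariates if c != best]
    return {best: {
        True: grow_tree([p for p in patients if p[best]], rest, min_score, min_n),
        False: grow_tree([p for p in patients if not p[best]], rest, min_score, min_n),
    }}

# Toy cohort: weekend-holiday status separates short from long stays,
# so it becomes the first (and only) splitting variable.
cohort = ([{"weekend": True, "icu": False, "los": 2}] * 5 +
          [{"weekend": False, "icu": False, "los": 6}] * 5)
tree = grow_tree(cohort, ["weekend", "icu"])
```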
In the second stage, we generated the preliminary probability of discharge for a specific date. This was calculated by assigning all patients in hospital to their discharge risk strata (Appendix). We then measured the probability of discharge on each hospitalization day in all discharge risk strata using data from the previous 180 days (we only used the prior 180 days of data to account for temporal changes in hospital discharge patterns). For example, consider a 75-year-old patient on her third hospital day under obstetrics/gynecology on December 19, 2015 (a Saturday). This patient would be assigned to risk stratum #133 (Appendix A). We then measured the probability of discharge of all patients in this discharge risk stratum hospitalized in the previous 6 months (ie, between June 22, 2015, and December 18, 2015) on each hospital day. For risk stratum #133, the probability of discharge on hospital day 3 was 0.1111; therefore, our sample patient’s preliminary expected discharge probability was 0.1111.
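The stage-2 calculation is a simple conditional probability and can be restated in code. The sketch below assumes, for illustration, that a stratum's look-back window is summarized as a list of completed lengths of stay; the toy data are chosen to reproduce the 0.1111 figure from the worked example and are not the study's data.

```python
# Stage 2 sketch: within one discharge risk stratum, the conditional
# probability of discharge on hospital day d is the number of patients
# discharged on day d divided by the number still in hospital at day d,
# computed over stays from the prior 180 days.

def daily_discharge_probability(lengths_of_stay, day):
    """lengths_of_stay: completed stay lengths (days) for one stratum over
    the 180-day look-back window."""
    at_risk = sum(1 for los in lengths_of_stay if los >= day)
    discharged = sum(1 for los in lengths_of_stay if los == day)
    return discharged / at_risk if at_risk else 0.0

# Toy stratum: all nine patients reach day 3 and one is discharged that
# day, giving 1/9 = 0.1111 as in the worked example.
stays = [3, 4, 4, 5, 5, 6, 6, 7, 8]
print(round(daily_discharge_probability(stays, 3), 4))  # 0.1111
```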
To attain stable daily discharge probability estimates, a minimum of 50 patients per discharge risk stratum-hospitalization day combination was required. If there were fewer than 50 patients for a particular hospitalization day in a particular discharge risk stratum, we grouped hospitalization days in that risk stratum together until the minimum of 50 patients was reached.
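The paper does not give the exact pooling algorithm, so the sketch below encodes one plausible reading: walk forward from the sparse day, accumulating discharge events and at-risk counts over consecutive days, and stop once enough patients have contributed. The threshold is lowered for the toy data; the study's threshold is 50.

```python
# Pooling sketch (ONE PLAUSIBLE READING of the rule, not the study's exact
# algorithm): merge consecutive hospitalization days until at least min_n
# at-risk patient observations accumulate, then return pooled discharge
# events divided by the pooled risk sets (an average discrete hazard).

def pooled_discharge_probability(lengths_of_stay, day, min_n=50):
    events, risk, d = 0, 0, day
    last_day = max(lengths_of_stay, default=day)
    while d <= last_day:
        risk += sum(1 for los in lengths_of_stay if los >= d)    # at risk on day d
        events += sum(1 for los in lengths_of_stay if los == d)  # discharged day d
        if risk >= min_n:
            break
        d += 1
    return events / risk if risk else 0.0

# Toy stratum with sparse late days: day 6 alone has only 4 patients at
# risk, so days 6 and 7 are pooled (min_n lowered to 6 for the toy data).
p = pooled_discharge_probability([6, 6, 7, 8], day=6, min_n=6)
print(p)  # 3 discharges / 6 pooled at-risk observations = 0.5
```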
The third (and final) stage accounted for the lack of granularity when we created the discharge risk strata in the first stage. As we mentioned above, admission service was clustered into 4 departments and the day of week was clustered into weekend/weekday. However, important variations in discharge probabilities could still exist within departments and between particular days of the week.10 Therefore, we created a correction factor to adjust the preliminary expected number of discharges based on the admission division and day of week. This correction factor used data from the 180 days prior to the analysis date within which the expected daily number of discharges was calculated (using the methods above). The correction factor was the relative difference between the observed and expected number of discharges within each division-day of week grouping.
For example, to calculate the correction factor for our sample patient presented above (75-year-old patient on hospital day 3 under gynecology on Saturday, December 19, 2015), we measured the observed number of discharges from gynecology on Saturdays between June 22, 2015, and December 18, 2015 (n = 206) and the expected number of discharges (n = 195.255), resulting in a correction factor of (observed-expected)/expected = (206-195.255)/195.255 = 0.05503. Therefore, the final expected discharge probability for our sample patient was 0.1111+0.1111*0.05503=0.1172. The expected number of discharges on a particular date was the preliminary expected number of discharges on that date (generated in the second stage) multiplied by 1 plus the correction factor for the corresponding division-day of week group.
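The worked example above reduces to two lines of arithmetic, reproduced here as a check: the correction factor is the relative difference between observed and expected discharges for the division/day-of-week group, and the preliminary probability is scaled by 1 plus that factor.

```python
# Stage 3 worked example: correction factor and adjusted probability.
observed, expected = 206, 195.255       # gynecology, Saturdays, prior 180 days
correction = (observed - expected) / expected
preliminary_p = 0.1111                  # stage-2 probability for hospital day 3

final_p = preliminary_p * (1 + correction)
print(round(correction, 5), round(final_p, 4))  # 0.05503 0.1172
```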
RESULTS
There were 192,859 admissions involving patients more than 1 year of age who spent at least part of their hospitalization between January 1, 2013, and December 31, 2015 (Table). Patients were middle-aged and slightly female predominant, with about half being admitted from the emergency department. Approximately 80% of admissions were to surgical or medical services. More than 95% of admissions ended with a discharge from the hospital, with the remainder ending in a death. Almost 30% of hospitalization days occurred on weekends or holidays. Hospitalizations in the derivation (2013-2014) and validation (2015) groups were essentially the same, except there was a slight drop in hospital length of stay (from a median of 4 days to 3 days) between the 2 periods.
Patient and hospital covariates importantly influenced the daily conditional probability of discharge (Figure 1). Patients admitted to the obstetrics/gynecology department were notably more likely to be discharged from hospital with no influence from the day of week. In contrast, the probability of discharge decreased notably on the weekends in the other departments. Patients on the ward were much more likely to be discharged than those in the intensive care unit, with increasing age associated with a decreased discharge likelihood in the former but not the latter patients. Finally, discharge probabilities varied only slightly between campuses at our hospital with discharge risk decreasing as severity of illness (as measured by LAPS) increased.
The TEND model contained 142 discharge risk strata (Appendix A). Weekend-holiday status had the strongest association with discharge probability (ie, it was the first splitting variable). The most complex discharge risk strata contained 6 covariates. The daily conditional probability of discharge during the first 2 weeks of hospitalization varied extensively between discharge risk strata (Figure 2). Overall, the conditional discharge probability increased from the first to the second day, remained relatively stable for several days, and then slowly decreased over time. However, this pattern and day-to-day variability differed extensively between risk strata.
The observed daily number of discharges in the validation cohort varied extensively (median 139; interquartile range [IQR] 95-160; range 39-214). The TEND model accurately predicted the daily number of discharges, with the expected daily number being strongly associated with the observed number (adjusted R2 = 89.2%; P < 0.0001; Figure 3). The strength of this association decreased but remained significant when we limited the analyses by hospital campus (General: R2 = 46.3%; P < 0.0001; Civic: R2 = 47.9%; P < 0.0001; Heart Institute: R2 = 18.1%; P < 0.0001). The expected number of daily discharges was an unbiased estimator of the observed number of discharges (its parameter estimate in a linear regression model with the observed number of discharges as the outcome variable was 1.0005; 95% confidence interval, 0.9647-1.0363). The difference between the observed and expected daily number of discharges was small (median 1.6; IQR −6.8 to 9.4; range −37 to 63.4), as was the relative difference (median 1.4%; IQR −5.5% to 7.1%; range −40.9% to 43.4%). The expected number of discharges was within 20% of the observed number of discharges on 95.1% of days in 2015.
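The validation metrics reported above can be sketched as follows: an ordinary least squares fit of observed on expected counts gives the R2 and the slope (whose closeness to 1 indicates unbiasedness), and a simple count gives the within-20% fraction. The daily counts below are illustrative, not the study's data.

```python
# Validation-metric sketch: slope, R-squared, and within-20% fraction for
# expected vs observed daily discharge counts (ILLUSTRATIVE data only).

def validate(expected, observed):
    n = len(expected)
    mx, my = sum(expected) / n, sum(observed) / n
    sxx = sum((x - mx) ** 2 for x in expected)
    sxy = sum((x - mx) * (y - my) for x, y in zip(expected, observed))
    slope = sxy / sxx                       # ~1 means unbiased predictions
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2
                 for x, y in zip(expected, observed))
    ss_tot = sum((y - my) ** 2 for y in observed)
    r2 = 1 - ss_res / ss_tot
    within20 = sum(1 for x, y in zip(expected, observed)
                   if abs(x - y) <= 0.2 * y) / n
    return slope, r2, within20

exp = [140, 95, 160, 120, 150, 100, 135]   # hypothetical expected counts
obs = [139, 98, 155, 118, 160, 97, 139]    # hypothetical observed counts
slope, r2, within20 = validate(exp, obs)
```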
DISCUSSION
Knowing how many patients will soon be discharged from the hospital should greatly facilitate hospital planning. This study showed that the TEND model used simple patient and hospitalization covariates to accurately predict the number of patients who will be discharged from hospital in the next day.
We believe that this study has several notable findings. First, we think that using a nonparametric approach to predicting the daily number of discharges importantly increased accuracy. This approach allowed us to generate expected likelihoods based on actual discharge probabilities at our hospital in the most recent 6 months of hospitalization-days among patients having discharge patterns that were very similar to the patient in question (ie, discharge risk strata, Appendix A). This ensured that trends in hospitalization habits were accounted for without the need for a period variable in our model. In addition, the lack of parameters in the model should make it easier to transfer to other hospitals. Second, we think that the accuracy of the predictions was remarkable given the relative “crudeness” of our predictors. By using relatively simple factors, the TEND model was able to output accurate predictions for the number of daily discharges (Figure 3).
This study joins several others that have attempted to accomplish the difficult task of predicting the number of hospital discharges by using digitized data. Barnes et al.11 created a model using regression random forest methods in a single medical service within a hospital to predict the daily number of discharges with impressive accuracy (mean daily number of discharges observed 8.29, expected 8.51). Interestingly, the model in this study was more accurate at predicting discharge likelihood than physicians. Levin et al.12 derived a model using discrete time logistic regression to predict the likelihood of discharge from a pediatric intensive care unit, finding that physician orders (captured via electronic order entry) could be categorized and used to significantly increase the accuracy of discharge likelihood. This study demonstrates the potential opportunities within health-related data from hospital data warehouses to improve prediction. We believe that continued work in this field will result in the increased use of digital data to help hospital administrators manage patient beds more efficiently and effectively than currently used resource intensive manual methods.13,14
Several issues should be kept in mind when interpreting our findings. First, our analysis is limited to a single institution in Canada. It will be important to determine whether the TEND model methodology generalizes to other hospitals in different jurisdictions; external validation, especially in multiple hospitals, will be needed to show that the methodology works in other facilities. Hospitals could implement the TEND model if they are able to record daily values for each of the variables required to assign patients to a discharge risk stratum (Appendix A) and calculate the daily probability of discharge within each stratum. Hospitals could derive their own discharge risk strata to account for covariates that we did not include in our study but that could be influential, such as insurance status. These discharge risk estimates could also be incorporated into the electronic medical record or hospital dashboards (as long as the data required to generate the estimates are available). These interventions would permit the expected number of hospital discharges (and even the patient-level probability of discharge) to be calculated on a daily basis. Second, 2 potential biases could have influenced the identification of our discharge risk strata (Appendix A). In this process, we used survival tree methods to separate patient-days into clusters having progressively more homogeneous discharge patterns. Each split was determined by using a proportional hazards model that ignored the competing risk of death in hospital. In addition, the model expressed age and LAPS as continuous variables, whereas these covariates had to be categorized to create our risk strata groupings. The strength of a covariate’s association with an outcome will decrease when a continuous variable is categorized.15 Both of these issues might have biased our final risk strata categorization (Appendix A).
Third, we limited our model to include simple covariates whose values could be determined relatively easily within most hospital administrative data systems. While this increases the generalizability to other hospital information systems, we believe that the introduction of other covariates to the model—such as daily vital signs, laboratory results, medications, or time from operations—could increase prediction accuracy. Finally, it is uncertain whether or not knowing the predicted number of discharges will improve the efficiency of bed management within the hospital. It seems logical that an accurate prediction of the number of beds that will be made available in the next day should improve decisions regarding the number of patients who could be admitted electively to the hospital. It remains to be seen, however, whether this truly happens.
In summary, we found that the TEND model used a handful of patient and hospitalization factors to accurately predict the expected number of discharges from hospital in the next day. Further work is required to implement this model into our institution’s data warehouse and then determine whether this prediction will improve the efficiency of bed management at our hospital.
Disclosure: CvW is supported by a University of Ottawa Department of Medicine Clinician Scientist Chair. The authors have no conflicts of interest.
1. Austin PC, Rothwell DM, Tu JV. A comparison of statistical modeling strategies for analyzing length of stay after CABG surgery. Health Serv Outcomes Res Methodol. 2002;3:107-133.
2. Moran JL, Solomon PJ. A review of statistical estimators for risk-adjusted length of stay: analysis of the Australian and new Zealand intensive care adult patient data-base, 2008-2009. BMC Med Res Methodol. 2012;12:68. PubMed
3. Verburg IWM, de Keizer NF, de Jonge E, Peek N. Comparison of regression methods for modeling intensive care length of stay. PLoS One. 2014;9:e109684. PubMed
4. Beyersmann J, Schumacher M. Time-dependent covariates in the proportional subdistribution hazards model for competing risks. Biostatistics. 2008;9:765-776. PubMed
5. Latouche A, Porcher R, Chevret S. A note on including time-dependent covariate in regression model for competing risks data. Biom J. 2005;47:807-814. PubMed
6. Fitzmaurice GM, Laird NM, Ware JH. Marginal models: generalized estimating equations. In: Applied Longitudinal Analysis. 2nd ed. John Wiley & Sons; 2011:353-394.
7. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232-239. PubMed
8. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63:798-803. PubMed
9. Bou-Hamad I, Larocque D, Ben-Ameur H. A review of survival trees. Statist Surv. 2011;44-71.
10. van Walraven C, Bell CM. Risk of death or readmission among people discharged from hospital on Fridays. CMAJ. 2002;166:1672-1673. PubMed
11. Barnes S, Hamrock E, Toerper M, Siddiqui S, Levin S. Real-time prediction of inpatient length of stay for discharge prioritization. J Am Med Inform Assoc. 2016;23:e2-e10. PubMed
12. Levin SRP, Harley ETB, Fackler JCM, et al. Real-time forecasting of pediatric intensive care unit length of stay using computerized provider orders. Crit Care Med. 2012;40:3058-3064. PubMed
13. Resar R, Nolan K, Kaczynski D, Jensen K. Using real-time demand capacity management to improve hospitalwide patient flow. Jt Comm J Qual Patient Saf. 2011;37:217-227. PubMed
14. de Grood A, Blades K, Pendharkar SR. A review of discharge prediction processes in acute care hospitals. Healthc Policy. 2016;12:105-115. PubMed
15. van Walraven C, Hart RG. Leave ‘em alone - why continuous variables should be analyzed as such. Neuroepidemiology 2008;30:138-139. PubMed
The TEND model contained 142 discharge risk strata (Appendix A). Weekend-holiday status had the strongest association with discharge probability (ie, it was the first splitting variable). The most complex discharge risk strata contained 6 covariates. The daily conditional probability of discharge during the first 2 weeks of hospitalization varied extensively between discharge risk strata (Figure 2). Overall, the conditional discharge probability increased from the first to the second day, remained relatively stable for several days, and then slowly decreased over time. However, this pattern and day-to-day variability differed extensively between risk strata.
The observed daily number of discharges in the validation cohort varied extensively (median 139; interquartile range [IQR] 95-160; range 39-214). The TEND model accurately predicted the daily number of discharges with the expected daily number being strongly associated with the observed number (adjusted R2 = 89.2%; P < 0.0001; Figure 3). Calibration decreased but remained significant when we limited the analyses by hospital campus (General: R2 = 46.3%; P < 0.0001; Civic: R2 = 47.9%; P < 0.0001; Heart Institute: R2 = 18.1%; P < 0.0001). The expected number of daily discharges was an unbiased estimator of the observed number of discharges (its parameter estimate in a linear regression model with the observed number of discharges as the outcome variable was 1.0005; 95% confidence interval, 0.9647-1.0363). The absolute difference in the observed and expected daily number of discharges was small (median 1.6; IQR −6.8 to 9.4; range −37 to 63.4) as was the relative difference (median 1.4%; IQR −5.5% to 7.1%; range −40.9% to 43.4%). The expected number of discharges was within 20% of the observed number of discharges in 95.1% of days in 2015.
DISCUSSION
Knowing how many patients will soon be discharged from the hospital should greatly facilitate hospital planning. This study showed that the TEND model used simple patient and hospitalization covariates to accurately predict the number of patients who will be discharged from hospital in the next day.
We believe that this study has several notable findings. First, we think that using a nonparametric approach to predicting the daily number of discharges importantly increased accuracy. This approach allowed us to generate expected likelihoods based on actual discharge probabilities at our hospital in the most recent 6 months of hospitalization-days within patients having discharge patterns that were very similar to the patient in question (ie, discharge risk strata, Appendix A). This ensured that trends in hospitalization habits were accounted for without the need of a period variable in our model. In addition, the lack of parameters in the model will make it easier to transplant it to other hospitals. Second, we think that the accuracy of the predictions were remarkable given the relative “crudeness” of our predictors. By using relatively simple factors, the TEND model was able to output accurate predictions for the number of daily discharges (Figure 3).
This study joins several others that have attempted to accomplish the difficult task of predicting the number of hospital discharges by using digitized data. Barnes et al.11 created a model using regression random forest methods in a single medical service within a hospital to predict the daily number of discharges with impressive accuracy (mean daily number of discharges observed 8.29, expected 8.51). Interestingly, the model in this study was more accurate at predicting discharge likelihood than physicians. Levin et al.12 derived a model using discrete time logistic regression to predict the likelihood of discharge from a pediatric intensive care unit, finding that physician orders (captured via electronic order entry) could be categorized and used to significantly increase the accuracy of discharge likelihood. This study demonstrates the potential opportunities within health-related data from hospital data warehouses to improve prediction. We believe that continued work in this field will result in the increased use of digital data to help hospital administrators manage patient beds more efficiently and effectively than currently used resource intensive manual methods.13,14
Several issues should be kept in mind when interpreting our findings. First, our analysis is limited to a single institution in Canada. It will be important to determine if the TEND model methodology generalizes to other hospitals in different jurisdictions. Such an external validation, especially in multiple hospitals, will be important to show that the TEND model methodology works in other facilities. Hospitals could implement the TEND model if they are able to record daily values for each of the variables required to assign patients to a discharge risk stratum (Appendix A) and calculate within each the daily probability of discharge. Hospitals could derive their own discharge risk strata to account for covariates, which we did not include in our study but could be influential, such as insurance status. These discharge risk estimates could also be incorporated into the electronic medical record or hospital dashboards (as long as the data required to generate the estimates are available). These interventions would permit the expected number of hospital discharges (and even the patient-level probability of discharge) to be calculated on a daily basis. Second, 2 potential biases could have influenced the identification of our discharge risk strata (Appendix A). In this process, we used survival tree methods to separate patient-days into clusters having progressively more homogenous discharge patterns. Each split was determined by using a proportional hazards model that ignored the competing risks of death in hospital. In addition, the model expressed age and LAPS as continuous variables, whereas these covariates had to be categorized to create our risk strata groupings. The strength of a covariate’s association with an outcome will decrease when a continuous variable is categorized.15 Both of these issues might have biased our final risk strata categorization (Appendix A). 
Third, we limited our model to include simple covariates whose values could be determined relatively easily within most hospital administrative data systems. While this increases the generalizability to other hospital information systems, we believe that the introduction of other covariates to the model—such as daily vital signs, laboratory results, medications, or time from operations—could increase prediction accuracy. Finally, it is uncertain whether or not knowing the predicted number of discharges will improve the efficiency of bed management within the hospital. It seems logical that an accurate prediction of the number of beds that will be made available in the next day should improve decisions regarding the number of patients who could be admitted electively to the hospital. It remains to be seen, however, whether this truly happens.
In summary, we found that the TEND model used a handful of patient and hospitalization factors to accurately predict the expected number of discharges from hospital in the next day. Further work is required to implement this model into our institution’s data warehouse and then determine whether this prediction will improve the efficiency of bed management at our hospital.
Disclosure: CvW is supported by a University of Ottawa Department of Medicine Clinician Scientist Chair. The authors have no conflicts of interest
Hospitals typically allocate beds based on historical patient volumes. If funding decreases, hospitals will usually try to maximize resource utilization by allocating beds to attain occupancies close to 100% for significant periods of time. This will invariably cause days in which hospital occupancy exceeds capacity, at which time critical entry points (such as the emergency department and operating room) will become blocked. This creates significant concerns over the patient quality of care.
Hospital administrators have very few options when hospital occupancy exceeds 100%. They could postpone admissions for “planned” cases, bring in additional staff to increase capacity, or instigate additional methods to increase hospital discharges such as expanding care resources in the community. All options are costly, bothersome, or cannot be actioned immediately. The need for these options could be minimized by enabling hospital administrators to make more informed decisions regarding hospital bed management by knowing the likely number of discharges in the next 24 hours.
Predicting the number of people who will be discharged in the next day can be approached in several ways. One approach would be to calculate each patient’s expected length of stay and then use the variation around that estimate to calculate each day’s discharge probability. Several studies have attempted to model hospital length of stay using a broad assortment of methodologies, but a mechanism to accurately predict this outcome has been elusive1,2 (with Verburg et al.3 concluding in their study’s abstract that “…it is difficult to predict length of stay…”). A second approach would be to use survival analysis methods to generate each patient’s hazard of discharge over time, which could be directly converted to an expected daily risk of discharge. However, this approach is complicated by the concurrent need to include time-dependent covariates and consider the competing risk of death in hospital, which can complicate survival modeling.4,5 A third approach would be the implementation of a longitudinal analysis using marginal models to predict the daily probability of discharge,6 but this method quickly overwhelms computer resources when large datasets are present.
In this study, we decided to use nonparametric models to predict the daily number of hospital discharges. We first identified patient groups with distinct discharge patterns. We then calculated the conditional daily discharge probability of patients in each of these groups. Finally, these conditional daily discharge probabilities were then summed for each hospital day to generate the expected number of discharges in the next 24 hours. This paper details the methods we used to create our model and the accuracy of its predictions.
METHODS
Study Setting and Databases Used for Analysis
The study took place at The Ottawa Hospital, a 1000-bed teaching hospital with 3 campuses that is the primary referral center in our region. The study was approved by our local research ethics board.
The Patient Registry Database records the date and time of admission (defined as the moment that a patient’s admission request is registered in the patient registration system) and discharge (defined as the time when the patient’s discharge from hospital was entered into the patient registration system) for each hospital encounter. Emergency department encounters were also identified in the Patient Registry Database, along with admission service, patient age and sex, and patient location throughout the admission. The Laboratory Database records all laboratory studies and results for all patients at the hospital.
Study Cohort
We used the Patient Registry Database to identify all people aged 1 year or more who were admitted to the hospital between January 1, 2013, and December 31, 2015. This time frame was selected to (i) ensure that data were complete and (ii) ensure that complete calendar years of data were available for both the derivation (patient-days in 2013-2014) and validation (2015) cohorts. Patients who were observed in the emergency room without admission to hospital were not included.
Study Outcome
The study outcome was the number of patients discharged from the hospital each day. For the analysis, the reference point for each day was 1 second past midnight; therefore, values for time-dependent covariates up to and including midnight were used to predict the number of discharges in the next 24 hours.
Study Covariates
Baseline (ie, time-independent) covariates included patient age and sex, admission service, hospital campus, whether or not the patient was admitted from the emergency department (all determined from the Patient Registry Database), and the Laboratory-based Acute Physiological Score (LAPS). The latter, which was calculated with the Laboratory Database using results for 14 tests (arterial pH, PaCO2, PaO2, anion gap, hematocrit, total white blood cell count, serum albumin, total bilirubin, creatinine, urea nitrogen, glucose, sodium, bicarbonate, and troponin I) measured in the 24-hour time frame preceding hospitalization, was derived by Escobar and colleagues7 to measure severity of illness and was subsequently validated in our hospital.8 The independent association of each laboratory perturbation with risk of death in hospital is reflected by the number of points assigned to each lab value with the total LAPS being the sum of these values. Time-dependent covariates included weekday in hospital and whether or not patients were in the intensive care unit.
Analysis
We used 3 stages to create a model to predict the daily expected number of discharges: we identified discharge risk strata containing patients having similar discharge patterns using data from patients in the derivation cohort (first stage); then, we generated the preliminary probability of discharge by determining the daily discharge probability in each discharge risk stratum (second stage); finally, we modified the probability from the second stage based on the weekday and admission service and summed these probabilities to create the expected number of discharges on a particular date (third stage).
The first stage identified discharge risk strata based on the covariates listed above. This was determined by using a survival tree approach9 with proportional hazards regression models to generate the “splits.” These models were offered all covariates listed in the Study Covariates section. Admission service was clustered within 4 departments (obstetrics/gynecology, psychiatry, surgery, and medicine) and day of week was “binarized” into weekday/weekend-holiday (because the use of categorical variables with large numbers of groups can “stunt” regression trees due to small numbers of patients—and, therefore, statistical power—in each subgroup). The proportional hazards model identified the covariate having the strongest association with time to discharge (based on the Wald χ2 value divided by the degrees of freedom). This variable was then used to split the cohort into subgroups (with continuous covariates being categorized into quartiles). The proportional hazards model was then repeated in each subgroup (with the previous splitting variable[s] excluded from the model). This process continued until no variable was associated with time to discharge with a P value less than .0001. This survival tree was then used to cluster all patients into distinct discharge risk strata.
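The greedy splitting step above can be sketched in code. This is a simplified illustration rather than the authors' implementation: it substitutes a two-sample log-rank statistic for the Cox model's Wald χ2, assumes binary covariates and that every stay ends in a discharge (no censoring or competing death risk), and the record schema (`los` for length of stay plus covariate flags) is invented for the example.

```python
import math
from collections import Counter

def logrank_chi2(times_a, times_b):
    """Two-sample log-rank statistic (1 df) for time to discharge.
    Simplification: every stay is assumed to end in a discharge."""
    events = sorted(set(times_a) | set(times_b))
    n_a, n_b = len(times_a), len(times_b)
    ca, cb = Counter(times_a), Counter(times_b)
    obs = exp = var = 0.0
    for t in events:
        d_a, d_b = ca[t], cb[t]
        d, n = d_a + d_b, n_a + n_b
        if n > 1:
            obs += d_a                                   # observed events in group A
            exp += d * n_a / n                           # expected under no difference
            var += d * n_a * n_b * (n - d) / (n * n * (n - 1))
        n_a -= d_a                                       # shrink the risk sets
        n_b -= d_b
    return (obs - exp) ** 2 / var if var > 0 else 0.0

def best_split(records, covariates):
    """Pick the binary covariate whose split shows the strongest
    association with time to discharge (largest chi-square)."""
    best, best_stat = None, 0.0
    for cov in covariates:
        a = [r["los"] for r in records if r[cov]]
        b = [r["los"] for r in records if not r[cov]]
        if not a or not b:
            continue
        stat = logrank_chi2(a, b)
        if stat > best_stat:
            best, best_stat = cov, stat
    # p-value for a chi-square statistic with 1 df
    p = math.erfc(math.sqrt(best_stat / 2)) if best_stat else 1.0
    return best, best_stat, p
```

In the paper's procedure this selection would be repeated recursively within each subgroup, stopping once no candidate split reaches P < .0001.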
In the second stage, we generated the preliminary probability of discharge for a specific date. This was calculated by assigning all patients in hospital to their discharge risk strata (Appendix A). We then measured the probability of discharge on each hospitalization day in all discharge risk strata using data from the previous 180 days (we used only the prior 180 days of data to account for temporal changes in hospital discharge patterns). For example, consider a 75-year-old patient on her third hospital day under obstetrics/gynecology on December 19, 2015 (a Saturday). This patient would be assigned to risk stratum #133 (Appendix A). We then measured the probability of discharge of all patients in this discharge risk stratum hospitalized in the previous 6 months (ie, between June 22, 2015, and December 18, 2015) on each hospital day. For risk stratum #133, the probability of discharge on hospital day 3 was 0.1111; therefore, our sample patient’s preliminary expected discharge probability was 0.1111.
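As a sketch, this second-stage lookup amounts to tabulating, for each (risk stratum, hospital day) pair in the 180-day window, the fraction of patient-days that ended in a discharge. The tuple schema below is hypothetical, not the authors' data model; with nine patient-days in stratum #133 on day 3 and one discharge, the worked example's probability of 1/9 ≈ 0.1111 falls out.

```python
from collections import defaultdict

def preliminary_probabilities(history):
    """Empirical conditional probability of discharge for each
    (risk stratum, hospital day) pair in a 180-day lookback window.
    `history` holds one (stratum, hospital_day, discharged) tuple
    per patient-day (illustrative schema)."""
    at_risk = defaultdict(int)
    discharged = defaultdict(int)
    for stratum, day, went_home in history:
        at_risk[(stratum, day)] += 1          # patient-days at risk
        if went_home:
            discharged[(stratum, day)] += 1   # of which, discharges
    return {key: discharged[key] / n for key, n in at_risk.items()}
```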
To attain stable daily discharge probability estimates, a minimum of 50 patients per discharge risk stratum-hospitalization day combination was required. If there were fewer than 50 patients for a particular hospitalization day in a particular discharge risk stratum, we grouped hospitalization days in that risk stratum together until the minimum of 50 patients was reached.
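One simple way to implement this pooling is to walk a stratum's hospitalization days in order and close a group whenever the running patient count reaches 50. The handling of a short tail (folding it into the last group) is an assumption for illustration; the paper does not specify it.

```python
def pool_days(day_counts, minimum=50):
    """Group consecutive hospitalization days within one stratum until
    each group holds at least `minimum` patients.
    `day_counts` maps hospital day -> number of patients observed."""
    groups, current, total = [], [], 0
    for day in sorted(day_counts):
        current.append(day)
        total += day_counts[day]
        if total >= minimum:      # close the group once it is large enough
            groups.append(current)
            current, total = [], 0
    if current:                   # fold any short tail into the last group
        if groups:
            groups[-1].extend(current)
        else:
            groups.append(current)
    return groups
```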
The third (and final) stage accounted for the lack of granularity when we created the discharge risk strata in the first stage. As we mentioned above, admission service was clustered into 4 departments and the day of week was clustered into weekend/weekday. However, important variations in discharge probabilities could still exist within departments and between particular days of the week.10 Therefore, we created a correction factor to adjust the preliminary expected number of discharges based on the admission division and day of week. This correction factor used data from the 180 days prior to the analysis date within which the expected daily number of discharges was calculated (using the methods above). The correction factor was the relative difference between the observed and expected number of discharges within each division-day of week grouping.
For example, to calculate the correction factor for our sample patient presented above (the 75-year-old patient on hospital day 3 under obstetrics/gynecology on Saturday, December 19, 2015), we measured the observed number of discharges from obstetrics/gynecology on Saturdays between June 22, 2015, and December 18, 2015 (n = 206) and the expected number of discharges (n = 195.255), resulting in a correction factor of (observed − expected)/expected = (206 − 195.255)/195.255 = 0.05503. Therefore, the final expected discharge probability for our sample patient was 0.1111 + 0.1111 × 0.05503 = 0.1172. The expected number of discharges on a particular date was the preliminary expected number of discharges on that date (generated in the second stage) multiplied by 1 plus the correction factor for the corresponding division-day of week group.
RESULTS
There were 192,859 admissions involving patients more than 1 year of age who spent at least part of their hospitalization between January 1, 2013, and December 31, 2015 (Table). Patients were middle-aged and slightly female predominant, with about half being admitted from the emergency department. Approximately 80% of admissions were to surgical or medical services. More than 95% of admissions ended with a discharge from the hospital, with the remainder ending in a death. Almost 30% of hospitalization days occurred on weekends or holidays. Hospitalizations in the derivation (2013-2014) and validation (2015) groups were essentially the same, except there was a slight drop in hospital length of stay (from a median of 4 days to 3 days) between the 2 periods.
Patient and hospital covariates importantly influenced the daily conditional probability of discharge (Figure 1). Patients admitted to the obstetrics/gynecology department were notably more likely to be discharged from hospital with no influence from the day of week. In contrast, the probability of discharge decreased notably on the weekends in the other departments. Patients on the ward were much more likely to be discharged than those in the intensive care unit, with increasing age associated with a decreased discharge likelihood in the former but not the latter patients. Finally, discharge probabilities varied only slightly between campuses at our hospital with discharge risk decreasing as severity of illness (as measured by LAPS) increased.
The TEND model contained 142 discharge risk strata (Appendix A). Weekend-holiday status had the strongest association with discharge probability (ie, it was the first splitting variable). The most complex discharge risk strata contained 6 covariates. The daily conditional probability of discharge during the first 2 weeks of hospitalization varied extensively between discharge risk strata (Figure 2). Overall, the conditional discharge probability increased from the first to the second day, remained relatively stable for several days, and then slowly decreased over time. However, this pattern and day-to-day variability differed extensively between risk strata.
The observed daily number of discharges in the validation cohort varied extensively (median 139; interquartile range [IQR] 95-160; range 39-214). The TEND model accurately predicted the daily number of discharges with the expected daily number being strongly associated with the observed number (adjusted R2 = 89.2%; P < 0.0001; Figure 3). Calibration decreased but remained significant when we limited the analyses by hospital campus (General: R2 = 46.3%; P < 0.0001; Civic: R2 = 47.9%; P < 0.0001; Heart Institute: R2 = 18.1%; P < 0.0001). The expected number of daily discharges was an unbiased estimator of the observed number of discharges (its parameter estimate in a linear regression model with the observed number of discharges as the outcome variable was 1.0005; 95% confidence interval, 0.9647-1.0363). The absolute difference in the observed and expected daily number of discharges was small (median 1.6; IQR −6.8 to 9.4; range −37 to 63.4) as was the relative difference (median 1.4%; IQR −5.5% to 7.1%; range −40.9% to 43.4%). The expected number of discharges was within 20% of the observed number of discharges in 95.1% of days in 2015.
DISCUSSION
Knowing how many patients will soon be discharged from the hospital should greatly facilitate hospital planning. This study showed that the TEND model used simple patient and hospitalization covariates to accurately predict the number of patients who will be discharged from hospital in the next day.
We believe that this study has several notable findings. First, we think that using a nonparametric approach to predicting the daily number of discharges importantly increased accuracy. This approach allowed us to generate expected likelihoods based on actual discharge probabilities at our hospital in the most recent 6 months of hospitalization-days within patients having discharge patterns that were very similar to the patient in question (ie, discharge risk strata, Appendix A). This ensured that trends in hospitalization habits were accounted for without the need for a period variable in our model. In addition, the lack of parameters in the model will make it easier to transplant to other hospitals. Second, we think that the accuracy of the predictions was remarkable given the relative “crudeness” of our predictors. By using relatively simple factors, the TEND model was able to output accurate predictions for the number of daily discharges (Figure 3).
This study joins several others that have attempted the difficult task of predicting the number of hospital discharges by using digitized data. Barnes et al.11 created a model using regression random forest methods in a single medical service within a hospital to predict the daily number of discharges with impressive accuracy (mean daily number of discharges observed 8.29, expected 8.51); interestingly, their model was more accurate at predicting discharge likelihood than physicians. Levin et al.12 derived a model using discrete time logistic regression to predict the likelihood of discharge from a pediatric intensive care unit, finding that physician orders (captured via electronic order entry) could be categorized and used to significantly increase the accuracy of discharge likelihood. These studies demonstrate the potential of health-related data from hospital data warehouses to improve prediction. We believe that continued work in this field will result in the increased use of digital data to help hospital administrators manage patient beds more efficiently and effectively than the resource-intensive manual methods currently used.13,14
Several issues should be kept in mind when interpreting our findings. First, our analysis is limited to a single institution in Canada. It will be important to determine whether the TEND model methodology generalizes to other hospitals in different jurisdictions; such an external validation, especially in multiple hospitals, will be important to show that the methodology works in other facilities. Hospitals could implement the TEND model if they are able to record daily values for each of the variables required to assign patients to a discharge risk stratum (Appendix A) and to calculate the daily probability of discharge within each stratum. Hospitals could also derive their own discharge risk strata to account for covariates that we did not include in our study but that could be influential, such as insurance status. These discharge risk estimates could also be incorporated into the electronic medical record or hospital dashboards (as long as the data required to generate the estimates are available). These interventions would permit the expected number of hospital discharges (and even the patient-level probability of discharge) to be calculated on a daily basis. Second, 2 potential biases could have influenced the identification of our discharge risk strata (Appendix A). In this process, we used survival tree methods to separate patient-days into clusters having progressively more homogeneous discharge patterns. Each split was determined by using a proportional hazards model that ignored the competing risk of death in hospital. In addition, the model expressed age and LAPS as continuous variables, whereas these covariates had to be categorized to create our risk strata groupings. The strength of a covariate’s association with an outcome will decrease when a continuous variable is categorized.15 Both of these issues might have biased our final risk strata categorization (Appendix A).
Third, we limited our model to include simple covariates whose values could be determined relatively easily within most hospital administrative data systems. While this increases the generalizability to other hospital information systems, we believe that the introduction of other covariates to the model—such as daily vital signs, laboratory results, medications, or time from operations—could increase prediction accuracy. Finally, it is uncertain whether or not knowing the predicted number of discharges will improve the efficiency of bed management within the hospital. It seems logical that an accurate prediction of the number of beds that will be made available in the next day should improve decisions regarding the number of patients who could be admitted electively to the hospital. It remains to be seen, however, whether this truly happens.
In summary, we found that the TEND model used a handful of patient and hospitalization factors to accurately predict the expected number of discharges from hospital in the next day. Further work is required to implement this model into our institution’s data warehouse and then determine whether this prediction will improve the efficiency of bed management at our hospital.
Disclosure: CvW is supported by a University of Ottawa Department of Medicine Clinician Scientist Chair. The authors have no conflicts of interest.
1. Austin PC, Rothwell DM, Tu JV. A comparison of statistical modeling strategies for analyzing length of stay after CABG surgery. Health Serv Outcomes Res Methodol. 2002;3:107-133.
2. Moran JL, Solomon PJ. A review of statistical estimators for risk-adjusted length of stay: analysis of the Australian and New Zealand intensive care adult patient database, 2008-2009. BMC Med Res Methodol. 2012;12:68. PubMed
3. Verburg IWM, de Keizer NF, de Jonge E, Peek N. Comparison of regression methods for modeling intensive care length of stay. PLoS One. 2014;9:e109684. PubMed
4. Beyersmann J, Schumacher M. Time-dependent covariates in the proportional subdistribution hazards model for competing risks. Biostatistics. 2008;9:765-776. PubMed
5. Latouche A, Porcher R, Chevret S. A note on including time-dependent covariate in regression model for competing risks data. Biom J. 2005;47:807-814. PubMed
6. Fitzmaurice GM, Laird NM, Ware JH. Marginal models: generalized estimating equations. In: Applied Longitudinal Analysis. 2nd ed. John Wiley & Sons; 2011:353-394.
7. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232-239. PubMed
8. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63:798-803. PubMed
9. Bou-Hamad I, Larocque D, Ben-Ameur H. A review of survival trees. Statist Surv. 2011;44-71.
10. van Walraven C, Bell CM. Risk of death or readmission among people discharged from hospital on Fridays. CMAJ. 2002;166:1672-1673. PubMed
11. Barnes S, Hamrock E, Toerper M, Siddiqui S, Levin S. Real-time prediction of inpatient length of stay for discharge prioritization. J Am Med Inform Assoc. 2016;23:e2-e10. PubMed
12. Levin SRP, Harley ETB, Fackler JCM, et al. Real-time forecasting of pediatric intensive care unit length of stay using computerized provider orders. Crit Care Med. 2012;40:3058-3064. PubMed
13. Resar R, Nolan K, Kaczynski D, Jensen K. Using real-time demand capacity management to improve hospitalwide patient flow. Jt Comm J Qual Patient Saf. 2011;37:217-227. PubMed
14. de Grood A, Blades K, Pendharkar SR. A review of discharge prediction processes in acute care hospitals. Healthc Policy. 2016;12:105-115. PubMed
15. van Walraven C, Hart RG. Leave ‘em alone - why continuous variables should be analyzed as such. Neuroepidemiology. 2008;30:138-139. PubMed
1. Austin PC, Rothwell DM, Tu JV. A comparison of statistical modeling strategies for analyzing length of stay after CABG surgery. Health Serv Outcomes Res Methodol. 2002;3:107-133.
2. Moran JL, Solomon PJ. A review of statistical estimators for risk-adjusted length of stay: analysis of the Australian and New Zealand intensive care adult patient database, 2008-2009. BMC Med Res Methodol. 2012;12:68. PubMed
3. Verburg IWM, de Keizer NF, de Jonge E, Peek N. Comparison of regression methods for modeling intensive care length of stay. PLoS One. 2014;9:e109684. PubMed
4. Beyersmann J, Schumacher M. Time-dependent covariates in the proportional subdistribution hazards model for competing risks. Biostatistics. 2008;9:765-776. PubMed
5. Latouche A, Porcher R, Chevret S. A note on including time-dependent covariate in regression model for competing risks data. Biom J. 2005;47:807-814. PubMed
6. Fitzmaurice GM, Laird NM, Ware JH. Marginal models: generalized estimating equations. Applied Longitudinal Analysis. 2nd ed. John Wiley & Sons; 2011;353-394.
7. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232-239. PubMed
8. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63:798-803. PubMed
9. Bou-Hamad I, Larocque D, Ben-Ameur H. A review of survival trees. Statist Surv. 2011;44-71.
10. van Walraven C, Bell CM. Risk of death or readmission among people discharged from hospital on Fridays. CMAJ. 2002;166:1672-1673. PubMed
11. Barnes S, Hamrock E, Toerper M, Siddiqui S, Levin S. Real-time prediction of inpatient length of stay for discharge prioritization. J Am Med Inform Assoc. 2016;23:e2-e10. PubMed
12. Levin SRP, Harley ETB, Fackler JCM, et al. Real-time forecasting of pediatric intensive care unit length of stay using computerized provider orders. Crit Care Med. 2012;40:3058-3064. PubMed
13. Resar R, Nolan K, Kaczynski D, Jensen K. Using real-time demand capacity management to improve hospitalwide patient flow. Jt Comm J Qual Patient Saf. 2011;37:217-227. PubMed
14. de Grood A, Blades K, Pendharkar SR. A review of discharge prediction processes in acute care hospitals. Healthc Policy. 2016;12:105-115. PubMed
15. van Walraven C, Hart RG. Leave ‘em alone - why continuous variables should be analyzed as such. Neuroepidemiology. 2008;30:138-139. PubMed
© 2018 Society of Hospital Medicine
Opening the door to gene editing?
In early August, a research team reported correcting a disease-causing mutation in human embryos in Portland, Ore. The scale and success of such experimentation with human embryos is unprecedented in the United States. Given the highly experimental nature of fertility clinics in the United States and abroad, many suggest that these findings open the door to designer babies. A careful read of the report, however, indicates that the door is still quite closed, perhaps cracked open just a little.
The research team used a new method of cutting the genome, called CRISPR-Cas9. CRISPR uses two key components that the team combined in a test tube: a Cas9 protein that cuts DNA and a synthetic RNA that guides the protein to a specific 20-letter sequence in the human genome. In these experiments, the Cas9-RNA complex was designed to cut a pathogenic mutation in the MYBPC3 gene, which can cause hypertrophic cardiomyopathy. The research team could not obtain human zygotes with this mutation on both copies of the genome (a rare homozygous genotype). Such zygotes would have the most severe phenotype and be the most compelling test case for CRISPR. Instead, they focused on gene editing heterozygous human zygotes that have one normal maternal copy of the MYBPC3 gene and one pathogenic paternal copy. The heterozygous zygotes were produced by the research team via in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI) using sperm donated by males carrying the pathogenic mutation (Nature. 2017 Aug 2. doi: 10.1038/nature23305).
When researchers injected the Cas9-RNA complex targeting the mutation into already fertilized zygotes, they found that 67% of the resulting embryos had two normal copies of the MYBPC3 gene. Without gene editing, approximately 50% of the embryos would have two normal copies, because the male sperm donor would produce equal numbers of sperm with normal and pathogenic genotypes. Thus, editing likely corrected only about 17% of the embryos, those that would otherwise have carried the pathogenic paternal mutation. Thirty-six percent of embryos had additional mutations from imprecise gene editing. Further, some of the gene edits and additional mutations were mosaic, meaning that the resulting embryo harbored many different genotypes.
To overcome these challenges, the research team precisely controlled the timing of CRISPR injection to coincide with fertilization. With controlled timing, gene editing was restricted to only the paternal pathogenic mutation, resulting in 72% of all injected embryos having two normal copies of the gene in all cells without any mosaicism. Whole genome sequencing revealed no additional mutations above the detection limit of the assay. Finally, preimplantation development proceeded normally to the blastocyst stage, suggesting that the edited embryos have no functional deficits from the procedure.
A surprising finding was that new sequences could not be put into the embryo. The research team had coinjected a synthetic DNA template that differed from the normal maternal copy, but never saw this sequence incorporated into any embryo. Instead, the zygote utilized the maternal copy of the gene with the normal sequence as a template for repairing the DNA cut in the paternal copy produced by CRISPR. The biology behind this repair process is poorly understood and has not been previously reported with other human cell types. These observations suggest that we cannot easily “write” our genome. Instead, our vocabulary is limited to what is already within either the maternal or paternal copy of the genome. In other words, designer babies are not around the corner. While preimplantation genetic diagnosis (PGD) is still currently the safest way to avoid passing on autosomal dominant mutations, these new findings could enable correction of such mutations within IVF embryos, resulting in a larger pool of embryos for IVF clinics to work with.
Apart from these technical challenges, the National Academies has not given a green light to implant edited human embryos. Instead, the organization calls for several requirements to be met, including “broad societal consensus” on the need for this type of intervention. While it is not clear whether or how consensus could be achieved, it is clear that scientists, clinicians, and patients will need help from the rest of society for this research to have an impact clinically.
Dr. Saha is assistant professor of biomedical engineering at the Wisconsin Institute for Discovery at the University of Wisconsin, Madison. His lab works on gene editing of human cells. He has patent filings through the Wisconsin Alumni Research Foundation on gene editing inventions.
Neurologist to endocrinologists: Listen to your patients’ feet
SAN DIEGO – Listen to the feet of your patients, and don’t just focus on glucose control as a way to combat diabetic neuropathy, according to Eva L. Feldman, MD, PhD.
Regular foot exams are crucial for patients with diabetes because of the high risk of diabetic neuropathy (DN), Dr. Feldman, professor of neurology and director of the Program for Neurology Research and Discovery at the University of Michigan, Ann Arbor, said in a presentation at the annual scientific sessions of the American Diabetes Association.
“Diabetic neuropathy has some important clinical consequences in a patient’s life. There’s clearly impaired function, a lower quality of life, and increased mortality. There’s also a high association between DN and cardiovascular disease and a high risk of amputation,” said Dr. Feldman, coauthor of the ADA’s new clinical guidelines for diabetic neuropathy (Diabetes Care. 2017;40[1]:136-54).
And in patients with type 2 diabetes, “diabetic neuropathy is not just hyperglycemia, which is what we’ve focused all our efforts on up until the past 5 years. There is also a role for dyslipidemia and other metabolic impairments,” she said.
In a follow-up interview, Dr. Feldman elaborated on a common misconception about DN, simple tools for foot exams by endocrinologists, and how to know when it’s time for a referral to a neurologist or podiatrist.
Q: What do you think physicians/endocrinologists misunderstand about diabetic neuropathy?
A: Commonly, physicians think if patients with diabetes do not complain of pain or numbness, they do not have DN. This simply isn’t correct. Over 80% of patients with DN have insensate feet – they simply do not have feeling in their feet. Physicians must examine a patient’s foot at least once a year to ensure the patient has not developed DN.
Q: What should endocrinologists understand about how diabetic neuropathy develops?
A: We know that excellent glucose control has a significant impact on DN in patients with type 1 diabetes. In patients with type 2 diabetes, we know that excellent glucose control plays a much less significant role. While it’s important, it must be coupled with control of other components of metabolic syndrome – elevated blood lipids, obesity, and hypertension.
Q: How should endocrinologists examine their patients’ feet?
A: Take a 128-Hz tuning fork and determine if the patient can feel vibration on the joint of the great toe for at least 10 seconds. Then take a 10-g monofilament and a pin and determine if the patient can feel both of these instruments when they are applied to the joint of the great toe. Some physicians also apply the 10-g monofilament to the sole of the foot. I would not suggest using a pin on the sole of the foot.
Q: What else should they look for when they inspect feet? And how often would you recommend that endocrinologists do this per patient?
A: Inspection for callus formation, fissure formation, and fungal infections is important, and a foot exam should be done once yearly.
Q: What about testing whether patients can feel temperature?
A: Temperature testing assesses the same class of nerve fibers as the pin, so it is not routinely done in an endocrinologist’s office.
Q: When should endocrinologists refer out for neuropathy?
A: Endocrinologists can treat DN by treating the diabetic condition and, in type 2 diabetes, the metabolic syndrome. Referral to a neurologist is indicated if there are atypical symptoms or signs, such as motor impairment that’s greater than sensory impairment, a significant asymmetry, or a very rapid onset. All patients with severe DN should be under the care of a podiatrist to prevent the development of nonhealing wounds and ulcers.
Q: What have you learned about how diabetic neuropathy affects the lives of patients?
A: DN definitely affects the quality of a patient’s life, not only in terms of work productivity and ability to perform activities of daily living. Quality of life and general enjoyment of life can frequently be adversely affected.
Q: How successful is treatment for diabetic neuropathy?
A: Treatment for pain can be very successful, and we have outlined a protocol in our recent ADA guidelines. For patients with uncontrollable pain, frequently a referral to a pain clinic is in order.
Dr. Feldman reports no relevant disclosures.
AT THE ADA ANNUAL SCIENTIFIC SESSIONS
Psoriasis, psoriatic arthritis research makes headway at GRAPPA meeting
The agenda for this year’s Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) annual meeting included sessions on juvenile psoriatic arthritis (jPsA), the microbiome in psoriasis and psoriatic arthritis (PsA), and setting up longitudinal cohort studies. There was also a workshop to define the clinical fit and feasibility of PsA outcome measures for clinical trials and updates on several GRAPPA-associated projects.
The meeting, held July 13-15 in Amsterdam, opened with the annual trainee session. This year’s trainee session attracted more than 40 abstracts, of which 6 were selected for oral presentations on a variety of topics including outcome measure assessment, imaging, the microbiome in PsA, and T-cell subsets in synovial fluid.
A session on jPsA emphasized the need for efforts to better define this disease entity and to study outcomes related to it. The current International League of Associations for Rheumatology criteria for juvenile idiopathic arthritis split jPsA across other subgroups (for example, enthesitis-related arthritis and undifferentiated arthritis).
Three investigators – Matt Stoll, MD, PhD, Devy Zisman, MD, and Elizabeth Mellins, MD – also presented studies of the epidemiology of jPsA.
Among the most intriguing sessions at the GRAPPA meeting was a series of three talks by Jose Scher, MD, Hok Bing Thio, MD, PhD, and Dirk Elewaut, PhD, that introduced the audience to the complexity of the microorganisms that call the human host “home” and their potential role in the development of inflammatory conditions, in particular psoriasis and PsA. These talks touched on the potential uses of the microbiome of the gut, skin, and oral mucosa in predicting therapy response and modulating the immune system.
Dafna Gladman, MD, led a session on “how to set up a cohort.” In this session, speakers provided a road map for setting up a longitudinal cohort and discussed opportunities and challenges along the way. This session provided a foundation for a meeting that followed the annual meeting, the GRAPPA Research Collaborative Network (RCN) meeting (held July 15-16). The RCN meeting aimed to develop a plan for a GRAPPA research network supporting collaborative research to identify biomarkers and outcomes in psoriasis and PsA. Speakers and panelists from academia and industry kicked off the meeting with a discussion of prior experiences in establishing international cohort studies. Subsequent sessions presented individual aspects of beginning a longitudinal cohort study for biomarkers, including methods for sample collection, regulatory processes, and data collected at potential sites; capturing and harmonizing clinical data for such studies; and policies for publication of RCN studies.
This year’s workshop was focused on the GRAPPA-Outcome Measures in Rheumatology (OMERACT) working group’s plan to develop a Core Outcome Measure Set, a collection of outcome measurement instruments to be used in randomized, controlled trials (RCTs) for PsA. During the plenary session, the team presented a process for the group to evaluate instruments, including assessment of match to the domain or concept of interest, feasibility, construct validity, and discrimination (including reliability and the ability to distinguish between two groups such as responders and nonresponders). Breakout groups then discussed one of the six tools being considered and voted on match to the domain of interest and feasibility for RCTs.
In a skin session, ongoing efforts to standardize and simplify the measurement of skin psoriasis in both the clinic and RCTs were presented. While the Psoriasis Area and Severity Index (PASI) is the current standard for measuring psoriasis severity and response to therapy in RCTs, the PASI is challenging for use in clinical practice. Joseph Merola, MD, presented data to support the use of the psoriasis Physician Global Assessment (PGA) x body surface area (BSA), a simpler measure than the PASI, as a potential substitute for the PASI. Updates from the International Dermatology Outcome Measures board included reporting of the psoriasis core domain set and a summary of the PsA symptoms working group’s efforts to identify patient-reported outcomes to identify and measure PsA among patients enrolled in psoriasis RCTs. April Armstrong, MD, discussed the National Psoriasis Foundation’s efforts to develop treat-to-target goals for psoriasis. Finally, Laura Coates, MBChB, PhD, presented a report on her efforts to examine a variety of psoriasis cut points for minimal disease activity and very low disease activity outcomes.
Dr. Ogdie is director of the Penn Psoriatic Arthritis Clinic at the University of Pennsylvania, Philadelphia, and is a member of the GRAPPA Steering Committee.
The agenda for this year’s Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) annual meeting included sessions on juvenile psoriatic arthritis (jPsA), the microbiome in psoriasis and psoriatic arthritis (PsA), and setting up longitudinal cohort studies. There was also a workshop to define the clinical fit and feasibility of PsA outcome measures for clinical trials, as well as updates on several GRAPPA-associated projects.
The meeting, held July 13-15 in Amsterdam, opened with the annual trainee session. This year’s trainee session attracted more than 40 abstracts, of which 6 were selected for oral presentations on a variety of topics including outcome measure assessment, imaging, the microbiome in PsA, and T-cell subsets in synovial fluid.
A session on jPsA emphasized the need to better define this disease entity and to study its outcomes. The current International League of Associations for Rheumatology criteria for juvenile idiopathic arthritis split jPsA across other subgroups (for example, enthesitis-related arthritis and undifferentiated arthritis).
Three investigators – Matt Stoll, MD, PhD, Devy Zisman, MD, and Elizabeth Mellins, MD – also presented studies of the epidemiology of jPsA.
Among the most intriguing sessions at the GRAPPA meeting was a series of three talks by Jose Scher, MD, Hok Bing Thio, MD, PhD, and Dirk Elewaut, PhD, that introduced the audience to the complexity of the micro-organisms that call the human host “home” and their potential role in the development of inflammatory conditions, particularly psoriasis and PsA. These talks touched on the potential uses of the gut, skin, and oral mucosal microbiome in predicting therapy response and modulating the immune system.
Dafna Gladman, MD, led a session on “how to set up a cohort.” In this session, speakers provided a road map for setting up a longitudinal cohort and discussed opportunities and challenges along the way. This session provided a foundation for the GRAPPA Research Collaborative Network (RCN) meeting that followed the annual meeting (held July 15-16). The RCN meeting aimed to develop a plan for a GRAPPA research network supporting collaborative research to identify biomarkers and outcomes in psoriasis and PsA. Speakers and panelists from academia and industry kicked off the meeting with a discussion of prior experiences in establishing international cohort studies. Subsequent sessions addressed individual aspects of beginning a longitudinal cohort study for biomarkers, including methods for sample collection, regulatory processes, and data collected at potential sites; capturing and harmonizing clinical data for such studies; and policies for publication of RCN studies.
This year’s workshop was focused on the GRAPPA-Outcome Measures in Rheumatology (OMERACT) working group’s plan to develop a Core Outcome Measure Set, a collection of outcome measurement instruments to be used in randomized, controlled trials (RCTs) for PsA. During the plenary session, the team presented a process for the group to evaluate instruments, including assessment of match to the domain or concept of interest, feasibility, construct validity, and discrimination (including reliability and the ability to distinguish between two groups such as responders and nonresponders). Breakout groups then discussed one of the six tools being considered and voted on match to the domain of interest and feasibility for RCTs.
In a skin session, speakers presented ongoing efforts to standardize and simplify the measurement of skin psoriasis in both the clinic and RCTs. While the Psoriasis Area and Severity Index (PASI) is the current standard for measuring psoriasis severity and response to therapy in RCTs, the PASI is challenging to use in clinical practice. Joseph Merola, MD, presented data supporting the psoriasis Physician Global Assessment (PGA) × body surface area (BSA), a simpler measure, as a potential substitute for the PASI. Updates from the International Dermatology Outcome Measures board included reporting of the psoriasis core domain set and a summary of the PsA symptoms working group’s efforts to identify patient-reported outcomes that detect and measure PsA among patients enrolled in psoriasis RCTs. April Armstrong, MD, discussed the National Psoriasis Foundation’s efforts to develop treat-to-target goals for psoriasis. Finally, Laura Coates, MBChB, PhD, presented a report on her efforts to examine a variety of psoriasis cut points for minimal disease activity and very low disease activity outcomes.
Dr. Ogdie is director of the Penn Psoriatic Arthritis Clinic at the University of Pennsylvania, Philadelphia, and is a member of the GRAPPA Steering Committee.
Future Hospitalist: Top 10 tips for carrying out a successful quality improvement project
Editor’s Note: This column is a quarterly feature written by members of the Physicians in Training Committee. It aims to encourage and educate students, residents, and early career hospitalists.
One of the biggest challenges early career hospitalists, residents, and medical students face in launching their first quality improvement (QI) project is knowing how and where to get started. QI can be highly rewarding, but it can also consume valuable time and resources without any guarantee of sustainable improvement. In this article, we outline 10 key factors to consider when starting a new project.
1. Frame your project so that it aligns with your hospital’s current goals
Choose a project with your hospital’s goals in mind. Securing resources such as health IT, financial, or staffing support will prove difficult unless you get buy-in from hospital leadership. If your project does not directly address hospital goals, frame the purpose to demonstrate that it still fits with leadership priorities. For example, though improving handoffs from daytime to nighttime providers may not be a specific goal, leadership should appreciate that this project is expected to improve patient safety.
2. Be SMART about goals
Many QI projects fail because the scope of the initial project is too large, unrealistic, or vague. Creating a clear and focused aim statement and keeping it “SMART” (Specific, Measurable, Achievable, Realistic, and Timely) will bring structure to the project.1 “We will reduce congestive heart failure readmissions on 5 medicine units at our hospital by 2.5% in 6 months” is an example of a SMART aim statement.
3. Involve the right people from the start
QI project disasters often start with the wrong team. Select members based on who is needed and not who is available. It is critical to include representatives or “champions” from each area that will be affected. People will buy into a new methodology much more quickly if they were engaged in its development or know that respected members in their area were involved.
4. Use a simple, systematic approach to guide improvement work
Various QI models exist and each offers a systematic approach for assessing and improving care services. The Model for Improvement developed by the Associates in Process Improvement2 is a simple and powerful framework for quality improvement that asks three questions: (1) What are we trying to accomplish? (2) How will we know a change is an improvement? (3) What changes can we make that will result in improvement? The model incorporates Plan-Do-Study-Act (PDSA) cycles to test changes on a small scale.
5. Good projects start with good background data
6. Choose interventions that are high impact, low effort
People adapt to change more easily when the change itself is easy, so ask: “Does this intervention add significant work?” The best interventions change a process without placing undue burden on the clinicians and staff involved.
7. If you can’t measure it, you can’t improve it
After implementation, collect enough data to know whether the changes made improved the process. Study outcome, process, and balancing measures. If possible, use data already being collected by your institution. While it is critical to have quantitative measures, qualitative data such as surveys and observations can also enrich understanding.
Example: Increasing early discharges on a medical unit.
Outcome measure – This is the desired outcome that the project aims to improve. This may be the percentage of discharges before noon (DBN) or the average discharge time.
Process measure – This is a measure of a specific change made to improve the outcome metric. The discharge orders may need to be placed earlier in the electronic medical record to improve DBN. This average discharge order time is an example of a process measure.
Balance measure – This metric evaluates whether the intended outcome is leading to unintended consequences. For example, tracking the readmission rate is an important balance measure to assess whether improved DBN is associated with rushed discharges and possible unsafe transitions.
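The three measure types above can be made concrete with a small sketch. Assuming a handful of made-up discharge records (all times, field layouts, and numbers here are hypothetical illustrations, not from any real dataset), the outcome, process, and balance measures for the early-discharge example might be computed as:

```python
from datetime import datetime, time

# Hypothetical records: (discharge order time, actual discharge time,
# readmitted within 30 days?) — illustrative values only.
records = [
    ("09:15", "11:30", False),
    ("10:40", "13:05", False),
    ("08:50", "11:55", True),
    ("13:20", "16:10", False),
]

def parse(t):
    """Parse an HH:MM string into a time object."""
    return datetime.strptime(t, "%H:%M").time()

# Outcome measure: percentage of discharges before noon (DBN).
dbn_pct = 100 * sum(parse(d) < time(12, 0) for _, d, _ in records) / len(records)

# Process measure: average discharge-order time, in minutes after midnight.
order_minutes = [parse(o).hour * 60 + parse(o).minute for o, _, _ in records]
avg_order_minutes = sum(order_minutes) / len(order_minutes)

# Balance measure: 30-day readmission rate.
readmit_rate = 100 * sum(r for _, _, r in records) / len(records)

print(f"DBN: {dbn_pct:.0f}%  "
      f"avg order time: {avg_order_minutes / 60:.1f}h  "
      f"readmissions: {readmit_rate:.0f}%")
```

On these sample records, the sketch reports a 50% DBN rate, an average order time of roughly 10:30 a.m., and a 25% readmission rate; tracking all three together is what guards against, say, DBN improving at the cost of rushed, unsafe discharges.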
8. Communicate project goals and progress
9. Manage resistance to change
“People responsible for planning and implementing change often forget that while the first task of change management is to understand the destination and how to get there, the first task of transition management is to convince people to leave home.” – William Bridges
Inertia is powerful. We may consider our continuous performance improvement initiative “the next big thing,” but others may not share this enthusiasm. We therefore need to build a compelling reason for others to become engaged and accept major changes to workflow. Different strategies may be needed depending on your audience: for some, data and rational analysis will be persuasive; for others, an emotional appeal will be more motivating. Share personal anecdotes and use patient stories. In addition, let providers know “what’s in it for them.” Some may have a personal interest in your project or may need QI experience for career advancement; others might be motivated by the possibilities for scholarship arising from this work.
10. Make the work count twice
Consider QI as a scholarly initiative from the start to bring rigor to the project at all phases. Describe the project in an abstract or manuscript once improvements have been made. Publication is a great way to boost team morale and help make a business case for future improvement work. The Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines provide an excellent framework for designing and writing up an improvement project.4 The guidelines focus on why the project was started, what was done, what was found, and what the findings mean.
Driving change is challenging, and it is tempting to jump ahead to “fixing the problem.” But implementing a successful QI project requires intelligent direction, strategic planning, and skillful execution. It is our hope that following the above tips will help you develop the best possible ideas and approach implementation in a systematic way, ultimately leading to meaningful change.
Dr. Reyna is assistant professor in the division of hospital medicine and unit medical director at Mount Sinai Medical Center in New York City. She is a Certified Clinical Microsystems Coach. Dr. Burger is associate professor and associate program director, internal medicine residency, at Mount Sinai Beth Israel. He is on the faculty for the SGIM Annual Meeting Precourse on QI and is head of the high value care committee at the department of medicine at Mount Sinai Beth Israel. Dr. Cho is assistant professor and director of quality and safety in the division of hospital medicine at Mount Sinai. He is a senior fellow at the Lown Institute.
References
1. MacLeod L. Making SMART goals smarter. Physician Exec. 2012 Mar-Apr;38(2):68-70, 72.
2. Langley GL, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco: Jossey-Bass; 2009.
3. Nelson EC, Batalden PB, Godfrey MM. Quality by Design: A Clinical Microsystems Approach. San Francisco: Jossey-Bass; 2007.
4. Ogrinc G, Davies L, Goodman D, et al. SQUIRE 2.0 (Standards for Quality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2015 Sep 14.
Editor’s Note: This column is a quarterly feature written by members of the Physicians in Training Committee. It aims to encourage and educate students, residents, and early career hospitalists.
One of the biggest challenges early career hospitalists, residents and medical students face in launching their first quality improvement (QI) project is knowing how and where to get started. QI can be highly rewarding, but it can also take valuable time and resources without guarantees of sustainable improvement. In this article, we outline 10 key factors to consider when starting a new project.
1. Frame your project so that it aligns with your hospital’s current goals
Choose a project with your hospital’s goals in mind. Securing resources such as health IT, financial, or staffing support will prove difficult unless you get buy-in from hospital leadership. If your project does not directly address hospital goals, frame the purpose to demonstrate that it still fits with leadership priorities. For example, though improving handoffs from daytime to nighttime providers may not be a specific goal, leadership should appreciate that this project is expected to improve patient safety.
2. Be SMART about goals
Many QI projects fail because the scope of the initial project is too large, unrealistic, or vague. Creating a clear and focused aim statement and keeping it “SMART” (Specific, Measurable, Achievable, Realistic, and Timely) will bring structure to the project.1 “We will reduce Congestive Heart Failure readmissions on 5 medicine units at our hospital by 2.5% in 6 months” is an example of a SMART aim statement.
3. Involve the right people from the start
QI project disasters often start with the wrong team. Select members based on who is needed and not who is available. It is critical to include representatives or “champions” from each area that will be affected. People will buy into a new methodology much more quickly if they were engaged in its development or know that respected members in their area were involved.
4. Use a simple, systematic approach to guide improvement work
Various QI models exist and each offers a systematic approach for assessing and improving care services. The Model for Improvement developed by the Associates in Process Improvement2 is a simple and powerful framework for quality improvement that asks three questions: (1) What are we trying to accomplish? (2) How will we know a change is an improvement? (3) What changes can we make that will result in improvement? The model incorporates Plan-Do-Study-Act (PDSA) cycles to test changes on a small scale.
5. Good projects start with good background data
6. Choose interventions that are high impact, low effort
People will more easily change if the change itself is easy. So consider the question “does this intervention add significant work?” The best interventions change a process without causing undue burden to the clinicians and staff involved.
7. If you can’t measure it, you can’t improve it
After implementation, collect enough data to know whether the changes made improved the process. Study outcome, process, and balancing measures. If possible, use data already being collected by your institution. While it is critical to have quantitative measures, qualitative data such as surveys and observations can also enrich understanding.
Example: Increasing early discharges in medical unit.
Outcome measure – This is the desired outcome that the project aims to improve. This may be the percentage of discharges before noon (DBN) or the average discharge time.
Process measure – This is a measure of a specific change made to improve the outcome metric. The discharge orders may need to be placed earlier in the electronic medical record to improve DBN. This average discharge order time is an example of a process measure.
Balance measure – This metric evaluates whether the intended outcome is leading to unintended consequences. For example, tracking the readmission rate is an important balance measure to assess whether improved DBN is associated with rushed discharges and possible unsafe transitions.
8. Communicate project goals and progress
9. Manage resistance to change
“People responsible for planning and implementing change often forget that while the first task of change management is to understand the destination and how to get there, the first task of transition management is to convince people to leave home.” – William Bridges
Inertia is powerful. We may consider our continuous performance improvement initiative as “the next big thing” but others may not share this enthusiasm. We therefore need to build a compelling reason for others to become engaged and accept major changes to work flow. Different strategies may be needed depending on your audience. Though for some, data and a rational analysis will be persuasive, for others the emotional argument will be the most motivating. Share personal anecdotes and use patient stories. In addition, let providers know “what’s in it for them.” Some may have a personal interest in your project or may need QI experience for career advancement; others might be motivated by the possibilities for scholarship arising from this work.
10. Make the work count twice
Consider QI as a scholarly initiative from the start to bring rigor to the project at all phases. Describe the project in an abstract or manuscript once improvements have been made. Publication is a great way to boost team morale and help make a business case for future improvement work. The Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines provide an excellent framework for designing and writing up an improvement project.4 The guidelines focus on why the project was started, what was done, what was found, and what the findings mean.
Driving change is challenging, and it is tempting to jump ahead to “fixing the problem.” But implementing a successful QI project requires intelligent direction, strategic planning, and skillful execution. It is our hope that following the above tips will help you develop the best possible ideas and approach implementation in a systematic way, ultimately leading to meaningful change.
Dr. Reyna is assistant professor in the division of hospital medicine and unit medical director at Mount Sinai Medical Center in New York City. She is a Certified Clinical Microsystems Coach. Dr. Burger is associate professor and associate program director, internal medicine residency, at Mount Sinai Beth Israel. He is on the faculty for the SGIM Annual Meeting Precourse on QI and is head of the high value care committee at the department of medicine at Mount Sinai Beth Israel. Dr. Cho is assistant professor and director of quality and safety in the division of hospital medicine at Mount Sinai. He is a senior fellow at the Lown Institute.
References
1. MacLeod L. Making SMART goals smarter. Physician Exec. 2012 Mar-Apr;38(2):68-70, 72.
2. Langley GL, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance (2nd edition). San Francisco: Jossey-Bass Publishers; 2009.
3. Nelson EC, Batalden PB, Godfrey MM. Quality By Design: A Clinical Microsystems Approach. San Francisco, California: Jossey-Bass; 2007.
4. Ogrinc G, Davies L, Goodman D, et al. SQUIRE 2.0 (Standards for Quality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2015 Sep 14.
AAD president sees the specialty as ‘a bright star on the dance floor’
NEW YORK – In a plenary session at the American Academy of Dermatology summer meeting, the AAD president offered an upbeat view of the profession, likening his role in leading the 19,000-member organization to that of a dancer and comparing the specialty itself to “a bright star on the dance floor.”
The specialty, however, is facing an uncertain future. “As the music changes, so must the dance,” Henry Lim, MD, told attendees. “And so it is with American medicine today. Successfully transitioning, adapting to those changes, is especially challenging for all of medicine, including for our specialty.”
Dr. Lim’s remarks came hours after President Donald Trump’s effort to dismantle the Affordable Care Act had failed. “We are in the middle of a health care system in deep turmoil and uncertainty – as you all saw from the vote this morning,” said Dr. Lim, whose 1-year term began in March.
Dermatology is assuming an ever-greater role as the U.S. population ages, he said. “The fastest-growing segment is people over 85, and last year Hallmark reported it sold 85,000 ‘Happy 100th Birthday’ cards.”
He cited the AAD’s Burden of Skin Disease Report, which found that nearly half of Americans over the age of 65 have at least one skin disease. That may not, however, translate into job security for dermatologists, he cautioned.
“A most concerning statistic from that report is that two in every three patients with skin disease are being treated by nondermatologists,” he said. Those practitioners include primary care physicians, pediatricians, hospitalists, nurse practitioners, and physician assistants. “We all know a major reason for it is access,” said Dr. Lim, who told a reporter prior to his speech that the academy has taken no position on whether it is for or against the Affordable Care Act.
But, he added in his speech, “we have been continuing to meet with individual members of Congress, Health and Human Services leadership, and the FDA – tackling issues eroding our ability to care for patients.”
Dr. Lim, chair emeritus of the department of dermatology and senior vice president for academic affairs at Henry Ford Health System in Detroit, cited in-office compounding, step therapy, narrow networks, funding for medical research, and scope of practice as examples.
“Listening is the key to understanding,” he noted, and the academy is doing just that. He and the rest of the academy’s leadership have visited with a number of state societies to listen to their concerns. “It is clear to me that, while we have handled many issues well, there are areas where we as an academy can do better,” he said.
Dr. Lim cited the need to “enhance our efforts in advocacy and to improve our communication, including our social media presence.”
The academy itself is in strong shape, with more than 90% of practicing dermatologists as members, he said. That places the AAD among the top specialty societies and means that future growth will likely come from international outreach.
Dr. Lim called on members to join the effort by taking to the dance floor themselves and participating. “Ask not what dermatology can do for you, ask what you can do for dermatology,” he concluded. “With the leadership of our academy listening to all of you and working together with all of you, I’m confident that dermatology will continue to be a bright star on the dance floor.”
AT THE 2017 AAD SUMMER MEETING
Patrick Conway leaves CMS for Blue Cross and Blue Shield
Patrick Conway, MD, deputy administrator for innovation and quality at the Centers for Medicare and Medicaid Services, is departing his government post to take the reins of Blue Cross and Blue Shield of North Carolina (Blue Cross NC).
In an Aug. 8 statement, Blue Cross NC announced that Dr. Conway will start as the insurer’s new president and CEO on Oct. 1. Blue Cross NC’s role in transforming the health care system in North Carolina is both a model for other plans and a system that Dr. Conway is excited to further improve, he said in a statement.
Blue Cross NC Board of Trustees Chair Frank Holding Jr. called Dr. Conway a national and international leader in health system transformation, quality, and innovation who will further advance Blue Cross NC’s goals.
“His unique experiences as a health care provider and as a leader of the world’s largest health care payor will help Blue Cross NC fulfill its mission to improve the health and well-being of our customers and communities,” Mr. Holding said in the statement.
Dr. Conway joined CMS in 2011 as the agency’s chief medical officer and ultimately became the agency’s deputy administrator for innovation and quality and director of the Center for Medicare and Medicaid Innovation. Following President Obama’s departure from office, Dr. Conway took over as acting CMS administrator from then-acting administrator Andy Slavitt until new administrator Seema Verma assumed the position in March.
A longtime pediatric hospitalist, Dr. Conway was selected as a Master of Hospital Medicine by the Society of Hospital Medicine. He also was elected to the Health and Medicine Division of the National Academies of Sciences, Engineering, and Medicine in 2014. Prior to joining CMS, Dr. Conway oversaw clinical operations and research at Cincinnati Children’s Hospital Medical Center as director of hospital medicine, with a focus on improving patient outcomes across the health system.
[email protected]
On Twitter @legal_med
Social Interaction May Enhance Patient Survival After Chemotherapy
Being with other patients during chemotherapy, rather than in isolation, may have long-term effects on survival after treatment, according to researchers from the National Human Genome Research Institute and the University of Oxford in the United Kingdom.
“People model behavior based on what’s around them,” said Jeff Lienert, lead author. “For example, you will often eat more when you’re dining with friends, even if you can’t see what they’re eating.” The researchers wanted to find out whether similar social interaction would influence chemotherapy patients.
The researchers used data on 4,691 cancer patients. As a proxy for social connection, the researchers gathered information on when patients checked in and out of the chemotherapy ward and how long they spent there, in “a small intimate space” where people could interact for a long period.
“Co-presence matters,” the researchers say. They found that when patients were around other patients who died in less than 5 years, they had a 72% chance of dying within 5 years. The best outcome, the researchers said, was when patients interacted with someone who survived for 5 or more years: Their risk of dying within 5 years dropped to 68%. “Being connected to a single survivor is similarly protective,” the researchers concluded, “as being connected to a single nonsurvivor is deleterious to patient survival.”
Because the study focused on “mere co-presence”—that is, just being together—their findings likely underestimate the influence of social forces, the researchers say. However, they note that “just being around others receiving treatment with similar stressors does not seem to impart any health effects”—suggesting that social facilitation and social support are not the underlying influence mechanism.
The study is the first to investigate, on a large scale, how social context in a treatment setting can play a significant role in disease outcomes. The researchers didn’t study why the difference in survival occurred, but they suggest that stress response may play a role. If a patient is unable to “fight or flee,” as in the situation of chemotherapy, Lienert says, the hormones can build. “Positive social support during the exact moments of greatest stress is crucial.”
Sources:
Lienert J, Marcum CS, Finney J, Reed-Tsochas F, Koehly L. Social influence on 5-year survival in a longitudinal chemotherapy ward co-presence network. Network Sci. 2017:1-20. https://www.cambridge.org/core/journals/network-science/article/social-influence-on-5year-survival-in-a-longitudinal-chemotherapy-ward-copresence-network/4E08D5F5A0D332AA5BB119310833A244. Accessed August 8, 2017.
National Institutes of Health. Social interaction affects cancer patients’ response to treatment. https://www.nih.gov/news-events/news-releases/social-interaction-affects-cancer-patients-response-treatment. Published July 19, 2017. Accessed August 8, 2017.
Perplexingly Purple
A 55-year-old woman presents for evaluation of a widespread rash that first appeared several months ago. The rash has resisted treatment with various OTC products—antifungal cream (tolnaftate), triple-antibiotic cream, and tea tree oil—and continues to itch terribly at times.
Her primary care provider prescribed oral terbinafine (250 mg/d) for a proposed fungal etiology after viewing the rash with a Wood lamp and performing a KOH prep. A one-month course yielded no relief.
The patient claims to be in good health otherwise. She does admit to going through a stressful period involving job loss, divorce, and care of her aging parents.
EXAMINATION
Multiple papulosquamous papules, nodules, and plaques are located on the patient’s arms, legs, wrists, and sacrum. The lesions range from pinpoint to several centimeters and oval to polygonal. They have a striking purple appearance. On closer inspection, many have a shiny, whitish sheen on the surface.
What is the diagnosis?
This is a classic case of lichen planus (LP), an unusual papulosquamous condition of unknown etiology. In addition to the mentioned areas, it can affect the mucosal surfaces, scalp, genitals, nails, and internal organs.
There is no evidence that the cause is infectious; rather, it appears to be triggered by a reaction to an unknown (possibly autoimmune) antigen. LP targets specific cells and produces a lichenoid reaction, in which the upper level of the dermis is broken down by an apoptotic process. This produces a characteristic sawtooth pattern at the dermoepidermal junction, marked by a pathognomonic lymphocytic infiltrate. The surface of the affected skin has a shiny, frosty appearance, also seen in other lichenoid conditions (eg, lichen sclerosus et atrophicus).
Other key diagnostic features are classified as “the Ps”: purple, plaquish, papular, planar (flat-topped), polygonal (multi-angular), penile, pruritic, and puzzling. The last word is key to triggering consideration of the other “Ps.”
LP that affects the scalp and mucosal surfaces can be problematic to treat. Topical steroids are the mainstay, but treatment can also include oral retinoids (acitretin, isotretinoin), methotrexate, antimalarials, and cyclosporine. Furthermore, LP can overlap with other diseases—notably lupus, for which TNF-α inhibitors are used with some success. For limited disease, intralesional steroid injections can be used (5 to 10 mg/cc triamcinolone).
This patient achieved good relief with a combination of topical and intralesional steroids. Fortunately, in most cases, the disease eventually resolves on its own.
TAKE-HOME LEARNING POINTS
- Lichen planus is an unusual condition, but the look and distribution seen in this case are fairly typical.
- The classic diagnostic “Ps” include: purple, papular, pruritic, plaquish, planar, and most of all puzzling.
- Wood lamp examination is useless for the most common dermatophytoses, which will not fluoresce; however, KOH preps are useful when correctly interpreted.
- In more obscure cases, a punch or shave biopsy will confirm the diagnosis.
A 55-year-old woman presents for evaluation of a widespread rash that first appeared several months ago. The rash has resisted treatment with various OTC products—antifungal cream (tolnaftate), triple-antibiotic cream, and tea tree oil—and continues to itch terribly at times.
A 55-year-old woman presents for evaluation of a widespread rash that first appeared several months ago. The rash has resisted treatment with various OTC products—antifungal cream (tolnaftate), triple-antibiotic cream, and tea tree oil—and continues to itch terribly at times.
Her primary care provider prescribed oral terbinafine (250 mg/d) for a proposed fungal etiology after viewing the rash with a Wood lamp and performing a KOH prep. A one-month course yielded no relief.
The patient claims to be in good health otherwise. She does admit to going through a stressful period involving job loss, divorce, and care of her aging parents.
EXAMINATION
Multiple papulosquamous papules, nodules, and plaques are located on the patient’s arms, legs, wrists, and sacrum. The lesions range from pinpoint to several centimeters and oval to polygonal. They have a striking purple appearance. On closer inspection, many have a shiny, whitish sheen on the surface.
What is the diagnosis?
This is a classic case of lichen planus (LP), an unusual papulosquamous condition of unknown etiology. In addition to the mentioned areas, it can affect the mucosal surfaces, scalp, genitals, nails, and internal organs.
There is no evidence that the cause is infectious; rather, it appears to be triggered by a reaction to an unknown (possibly autoimmune) antigen. LP targets specific cells and produces a lichenoid reaction, in which the upper level of the dermis is broken down by an apoptotic process. This produces a characteristic sawtooth pattern at the dermoepidermal junction, formed by a pathognomonic lymphocytic infiltrate. The surface of the affected skin has a shiny, frosty appearance, also seen in other lichenoid conditions (eg, lichen sclerosus et atrophicus).
Other key diagnostic features are classified as “the Ps”: purple, plaquish, papular, planar (flat-topped), polygonal (multi-angular), penile, pruritic, and puzzling. The last word is key to triggering consideration of the other “Ps.”
LP that affects the scalp and mucosal surfaces can be problematic to treat. Topical steroids are the mainstay, but treatment can also include oral retinoids (acitretin, isotretinoin), methotrexate, antimalarials, and cyclosporine. Furthermore, LP can overlap with other diseases—notably lupus, for which TNF-α inhibitors are used with some success. For limited disease, intralesional steroid injections can be used (triamcinolone, 5 to 10 mg/mL).
This patient achieved good relief with a combination of topical and intralesional steroids. Fortunately, in most cases, the disease eventually resolves on its own.
TAKE-HOME LEARNING POINTS
- Lichen planus is an unusual condition, but the look and distribution seen in this case are fairly typical.
- The classic diagnostic “Ps” include: purple, papular, pruritic, plaquish, planar, and most of all puzzling.
- Wood lamp examination is useless for the most common dermatophytoses, which will not fluoresce; however, KOH preps are useful when correctly interpreted.
- In more obscure cases, a punch or shave biopsy will confirm the diagnosis.
Strategy could reduce myelosuppression in AML
Researchers believe they may have found a way to prevent chemotherapy-induced myelosuppression in acute myeloid leukemia (AML).
The team found that priming mice with the FLT3 inhibitor quizartinib protected multipotent progenitor cells (MPPs) from subsequent treatment with fluorouracil (5-FU) or gemcitabine.
And treatment with quizartinib followed by 5-FU proved more effective against AML than standard induction with cytarabine and doxorubicin.
Samuel Taylor, of the University of Western Australia in Crawley, Australia, and his colleagues reported these results in Science Translational Medicine.
The researchers first found that quizartinib induced “rapid and transient” quiescence of MPPs in C57BL/6 mice.
Quizartinib also provided MPPs with “marked protection” from 5-FU. In these experiments, a 10 mg/kg dose of quizartinib was given to mice at the same time as a 150 mg/kg dose of 5-FU. This treatment provided MPPs with 4- to 5-fold greater protection than vehicle control.
Subsequent experiments revealed the optimal dose and schedule for quizartinib. A priming dose of 30 mg/kg given 6 hours before 5-FU provided “slightly greater” protection to hematopoietic stem and progenitor cells than a 10 mg/kg dose, with significantly greater protection observed for short-term hematopoietic stem cells.
The researchers then showed that priming with quizartinib allowed for “rapid recovery of bone marrow cellularity” after treatment with 5-FU. Bone marrow cells were fully restored by day 8 after treatment in quizartinib-primed mice but not in vehicle-primed mice.
Quizartinib priming also protected mice from multiple rounds of treatment with 5-FU (15 cycles in some mice) and from myelosuppression induced by gemcitabine.
Finally, the researchers tested quizartinib followed by 5-FU in mouse models of AML. They found the treatment was more effective than treatment with cytarabine and doxorubicin in both FLT3-ITD(F691L)/NPM1c AML and NPM1c/NrasG12D AML.
FLT3-ITD(F691L)/NPM1c AML
The researchers transplanted 15 non-irradiated B6.CD45.1 mice with 3 × 10⁵ spleen cells each from a FLT3-ITD(F691L)/NPM1c mouse that succumbed to AML at 6 weeks of age. Sixteen days after transplant, the mice were given one of the following:
- No treatment
- 10-day cycles of quizartinib (30 mg/kg) followed 6 hours later by 5-FU (150 mg/kg)
- Cytarabine plus doxorubicin (5+3).
All 5 of the untreated mice died within 30 days of transplantation, exhibiting high white blood cell (WBC) counts and splenomegaly.
The 5+3 mice received 2 cycles of treatment (days 16 to 21 and 36 to 41). All 5 had died by day 56 after transplantation, with high WBC counts and splenomegaly.
On the other hand, 4 of the 5 mice in the quizartinib/5-FU arm were still healthy at 176 days after transplantation and 80 days after stopping treatment. There were no detectable CD45.2+ AML cells when the mice were last bled on day 160, and they had normal WBC counts. There were no AML cells detectable in the animals’ bone marrow after they were killed at day 176.
The quizartinib/5-FU mouse that died before day 176 is believed to have developed resistance to 5-FU. This animal died 121 days after transplantation.
NPM1c/NrasG12D AML
For another AML model, the researchers crossed NPM1c-mutant mice with NrasG12D-mutant mice. The team transplanted spleen cells from NPM1c/NrasG12D leukemic mice into 15 non-irradiated B6.CD45.1 recipient mice.
Fifteen days after transplantation, the NPM1c/NrasG12D mice received one of the following:
- No treatment
- Quizartinib and 5-FU as above
- Cytarabine plus doxorubicin (5+3).
All 5 untreated mice died by day 32 after transplantation, and all 5 mice that received 5+3 died by day 35. Both groups of mice had high WBC counts and splenomegaly.
Mice in the quizartinib/5-FU arm initially received 4 cycles of treatment, starting on days 15, 25, 35, and 45 after transplantation. On day 53, they had minimal or undetectable numbers of CD45.2+ AML cells, and WBC counts were normal or slightly below normal.
At day 81—a month after stopping treatment—4 of the mice had detectable CD45.2+ AML cells in their blood, so the researchers restarted treatment the next day. After 4 additional cycles, AML cells were undetectable in all 5 mice. At day 146—a month after stopping the second round of treatment—AML cells again became detectable in the blood.
The mice did not receive any additional treatment. One died at day 196, and 1 was killed at day 197 due to weight loss related to feeding difficulties (but this mouse did not show signs of AML).
The other 3 mice were “active and healthy” until they were killed at day 214. However, they had had “high proportions” of CD45.2+ myeloid cells in their blood since day 183, and 2 of the mice had elevated WBC counts from day 197.